  1. Jan 14, 2013
    • Improve handling of ereport(ERROR) and elog(ERROR). · b853eb97
      Tom Lane authored
      In commit 71450d7f, we added code to inform
      suitably-intelligent compilers that ereport() doesn't return if the elevel
      is ERROR or higher.  This patch extends that to elog(), and also fixes a
      double-evaluation hazard that the previous commit created in ereport(),
      as well as reducing the emitted code size.
      
      The elog() improvement requires the compiler to support __VA_ARGS__, which
      should be available in just about anything nowadays since it's required by
      C99.  But our minimum language baseline is still C89, so add a configure
      test for that.
      
      The previous commit assumed that ereport's elevel could be evaluated twice,
      which isn't terribly safe --- there are already counterexamples in xlog.c.
      On compilers that have __builtin_constant_p, we can use that to protect the
      second test, since there's no possible optimization gain if the compiler
      doesn't know the value of elevel.  Otherwise, use a local variable inside
      the macros to prevent double evaluation.  The local-variable solution is
      inferior because (a) it leads to useless code being emitted when elevel
      isn't constant, and (b) it increases the optimization level needed for the
      compiler to recognize that subsequent code is unreachable.  But it seems
      better than not teaching non-gcc compilers about unreachability at all.
      
      Lastly, if the compiler has __builtin_unreachable(), we can use that
      instead of abort(), resulting in a noticeable code savings since no
      function call is actually emitted.  However, it seems wise to do this only
      in non-assert builds.  In an assert build, continue to use abort(), so that
      the behavior will be predictable and debuggable if the "impossible"
      happens.
      
      These changes involve making the ereport and elog macros emit do-while
      statement blocks not just expressions, which forces small changes in
      a few call sites.
      
      Andres Freund, Tom Lane, Heikki Linnakangas
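      
      A rough sketch of the macro pattern described above (simplified
      names and a C99 compiler assumed; not the exact PostgreSQL source):
      
          extern void elog_start(const char *file, int line, const char *func);
          extern void elog_finish(int elevel, const char *fmt, ...);
          
          /* In assert builds keep abort() so "impossible" cases stay
           * debuggable; otherwise let the compiler treat the path as
           * unreachable and emit no function call at all. */
          #if defined(USE_ASSERT_CHECKING) || !defined(HAVE__BUILTIN_UNREACHABLE)
          #define pg_unreachable() abort()
          #else
          #define pg_unreachable() __builtin_unreachable()
          #endif
          
          #ifdef HAVE__BUILTIN_CONSTANT_P
          /* It is safe to mention elevel twice here: the second reference
           * is guarded by __builtin_constant_p, so it is only evaluated
           * when the compiler already knows the value (no side effects
           * possible, and no optimization gain otherwise anyway). */
          #define elog(elevel, ...) \
              do { \
                  elog_start(__FILE__, __LINE__, __func__); \
                  elog_finish(elevel, __VA_ARGS__); \
                  if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \
                      pg_unreachable(); \
              } while (0)
          #else
          /* Fall back to a local variable to avoid double evaluation. */
          #define elog(elevel, ...) \
              do { \
                  int elevel_ = (elevel); \
                  elog_start(__FILE__, __LINE__, __func__); \
                  elog_finish(elevel_, __VA_ARGS__); \
                  if (elevel_ >= ERROR) \
                      pg_unreachable(); \
              } while (0)
          #endif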
  2. Jan 12, 2013
    • Extend and improve use of EXTRA_REGRESS_OPTS. · 4ae5ee6c
      Andrew Dunstan authored
      This is now used by ecpg tests, and not clobbered by pg_upgrade
      tests. This change won't affect anything that doesn't set this
      environment variable, but will enable the buildfarm to control
      exactly what port regression test installs will be running on,
      and thus to detect possible rogue postmasters more easily.
      
      Backpatch to release 9.2 where EXTRA_REGRESS_OPTS was first used.
  3. Jan 07, 2013
    • Add new "-q" logging option (quiet mode) while in initialize mode · cf03ff6c
      Tatsuo Ishii authored
      (-i), producing only one progress message per 5 seconds along with
      elapsed time and estimated remaining time.  Also add elapsed time and
      estimated remaining time to the default logging (which prints one
      message every 100000 rows).
      Patch contributed by Tomas Vondra, reviewed by Jeevan Chalke and
      Tatsuo Ishii.
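      
      A hedged sketch of the estimate behind those messages (variable
      names are illustrative, not the actual pgbench code): the remaining
      time is the elapsed time scaled by the ratio of work left to work
      done.
      
          /* Illustrative only: "done", "total_rows", and the clock values
           * are assumed to come from the surrounding load loop. */
          double elapsed = now - start;   /* seconds since the load began */
          double remaining = elapsed * (total_rows - done) / done;
          fprintf(stderr,
                  "%d of %d tuples (%d%%) done "
                  "(elapsed %.2f s, remaining %.2f s)\n",
                  done, total_rows, (int) (done * 100.0 / total_rows),
                  elapsed, remaining);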
  4. Jan 04, 2013
    • Prevent creation of postmaster's TCP socket during pg_upgrade testing. · 78a5e738
      Tom Lane authored
      On non-Windows machines, we use the Unix socket for connections to test
      postmasters, so there is no need to create a TCP socket.  Furthermore,
      doing so causes failures due to port conflicts if two builds are carried
      out concurrently on one machine.  (If the builds are done in different
      chroots, which is standard practice at least in Red Hat distros, there
      is no risk of conflict on the Unix socket.)  Suppressing the TCP socket
      by setting listen_addresses to empty has long been standard practice
      for pg_regress, and pg_upgrade knows about this too ... but pg_upgrade's
      test.sh didn't get the memo.
      
      Back-patch to 9.2, and also sync the 9.2 version of the script with HEAD
      as much as practical.
  5. Dec 11, 2012
    • Fix pg_upgrade for invalid indexes · e95c4bd1
      Bruce Momjian authored
      All versions of pg_upgrade upgraded invalid indexes caused by CREATE
      INDEX CONCURRENTLY failures and marked them as valid.  The patch adds a
      check to all pg_upgrade versions and throws an error during upgrade or
      --check.
      
      Backpatch to 9.2, 9.1, 9.0.  Patch slightly adjusted.
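      
      A hedged sketch of the kind of check added (helper names follow
      pg_upgrade conventions of the era but should be treated as
      assumptions, not the exact patch):
      
          /* Illustrative only: fail the upgrade if any invalid indexes
           * (typically left behind by a failed CREATE INDEX CONCURRENTLY)
           * exist in the old cluster. */
          res = executeQueryOrDie(conn,
                                  "SELECT n.nspname, c.relname "
                                  "FROM pg_catalog.pg_class c, "
                                  "     pg_catalog.pg_index i, "
                                  "     pg_catalog.pg_namespace n "
                                  "WHERE i.indisvalid = false AND "
                                  "      i.indexrelid = c.oid AND "
                                  "      c.relnamespace = n.oid");
          if (PQntuples(res) > 0)
              pg_log(PG_FATAL, "invalid indexes found: REINDEX or DROP "
                     "them before running pg_upgrade\n");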
    • Add mode where contrib installcheck runs each module in a separately named database. · ad69bd05
      Andrew Dunstan authored
      Normally each module is tested in a database named contrib_regression,
      which is dropped and recreated at the beginning of each pg_regress run.
      This new mode, enabled by adding USE_MODULE_DB=1 to the make command
      line, runs most modules in a database with the module name embedded in
      it.
      
      This will make testing pg_upgrade on clusters with the contrib modules
      a lot easier.
      
      Second attempt at this, this time accommodating make versions older
      than 3.82.
      
      Still to be done: adapt to the MSVC build system.
      
      Backpatch to 9.0, which is the earliest version it is reasonably
      possible to test upgrading from.
    • Fix pg_upgrade -O/-o options · acdb8c22
      Bruce Momjian authored
      Fix previous commit that added synchronous_commit=off, but broke -O/-o
      due to missing space in argument passing.
      
      Backpatch to 9.2.
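      
      To illustrate the bug class (hypothetical code, not the actual
      patch): when the option strings are concatenated without a
      separating space, the user's -o/-O arguments fuse with the
      preceding generated option.
      
          /* Buggy: no separator, so the result reads "...=off-c my_opt";
           * "opts" and "user_opts" are assumed buffers. */
          snprintf(opts, sizeof(opts), "%s%s",
                   "-c synchronous_commit=off", user_opts);
          /* Fixed: a space separates the generated option from the
           * user-supplied options. */
          snprintf(opts, sizeof(opts), "%s %s",
                   "-c synchronous_commit=off", user_opts);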
  6. Dec 07, 2012
    • Improve pg_upgrade's status display · 6dd95845
      Bruce Momjian authored
      Pg_upgrade displays file names during copy and database names during
      dump/restore.  Andrew Dunstan identified three bugs:
      
      *  long file names were being truncated to 60 _leading_ characters, which
         often do not change for long file names
      
      *  file names were truncated to 60 characters in log files
      
      *  carriage returns were being output to log files
      
      This commit fixes these --- it prints 60 _trailing_ characters to the
      status display, and full path names without carriage returns to log
      files.  It also suppresses status output to the log file unless verbose
      mode is used.
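      
      A hedged sketch of the trailing-truncation logic (names are
      illustrative): show only the last 60 characters on the terminal,
      but write the full name, newline-terminated, to the log.
      
          #include <stdbool.h>
          #include <stdio.h>
          #include <string.h>
          
          #define MESSAGE_WIDTH 60
          
          static void
          report_status(FILE *logfile, bool for_terminal, const char *message)
          {
              if (for_terminal)
              {
                  size_t len = strlen(message);
                  const char *disp = (len > MESSAGE_WIDTH) ?
                      message + len - MESSAGE_WIDTH : message;
          
                  printf("%-*s\r", MESSAGE_WIDTH, disp);  /* CR keeps one status line */
              }
              else
                  fprintf(logfile, "%s\n", message);      /* full path, no CR */
          }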
  7. Dec 06, 2012
    • Background worker processes · da07a1e8
      Alvaro Herrera authored
      Background workers are postmaster subprocesses that run arbitrary
      user-specified code.  They can request shared memory access as well as
      backend database connections; or they can just use plain libpq frontend
      database connections.
      
      Modules listed in shared_preload_libraries can register background
      workers in their _PG_init() function; this is early enough that it's not
      necessary to provide an extra GUC option, because the necessary extra
      resources can be allocated early on.  Modules can install more than one
      bgworker, if necessary.
      
      Care is taken that these extra processes do not interfere with other
      postmaster tasks: only one such process is started on each ServerLoop
      iteration, so even if a large number of them are waiting to be
      started, the postmaster can still service external connection
      requests promptly.  Also, the shutdown sequence should not be
      impacted by a worker process that is reasonably well behaved (i.e.,
      one that promptly responds to termination signals).
      
      The current implementation lets worker processes specify their start
      time, i.e. at what point in the server startup process they are to be
      started: right after postmaster start (in which case they mustn't ask
      for shared memory access), when consistent state has been reached
      (useful during recovery in a HOT standby server), or when recovery has
      terminated (i.e. when normal backends are allowed).
      
      In case of a bgworker crash, the actions taken depend on the
      registration data: if shared memory was requested, then all other
      connections (as well as other bgworkers) are taken down, just as if
      a regular backend had crashed.  The bgworker itself is also
      restarted, within a configurable timeframe (which can be set to
      never).
      
      More features to add to this framework can be imagined without much
      effort, and have been discussed, but this seems good enough as a useful
      unit already.
      
      An elementary sample module is supplied.
      
      Author: Álvaro Herrera
      
      This patch is loosely based on prior patches submitted by KaiGai Kohei,
      and unsubmitted code by Simon Riggs.
      
      Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund,
      Heikki Linnakangas, Simon Riggs, Amit Kapila
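      
      A hedged sketch of registering a worker from a module's _PG_init()
      (field and constant names follow the API described above, as seen
      in the sample module, but treat the details as illustrative):
      
          #include "postmaster/bgworker.h"
          
          static void
          my_worker_main(void *main_arg)
          {
              /* loop doing work until signaled to terminate ... */
          }
          
          void
          _PG_init(void)
          {
              BackgroundWorker worker;
          
              worker.bgw_name = "sample worker";
              /* request shared memory plus a backend database connection */
              worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
                                 BGWORKER_BACKEND_DATABASE_CONNECTION;
              /* start only once recovery has finished */
              worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
              worker.bgw_restart_time = 60;   /* seconds; restart on crash */
              worker.bgw_main = my_worker_main;
              RegisterBackgroundWorker(&worker);
          }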
  8. Dec 02, 2012
    • Add mode where contrib installcheck runs each module in a separately named database. · e2b3c21b
      Andrew Dunstan authored
      Normally each module is tested in a database named contrib_regression,
      which is dropped and recreated at the beginning of each pg_regress run.
      This mode, enabled by adding USE_MODULE_DB=1 to the make command line,
      runs most modules in a database with the module name embedded in it.
      
      This will make testing pg_upgrade on clusters with the contrib modules
      a lot easier.
      
      Still to be done: adapt to the MSVC build system.
      
      Backpatch to 9.0, which is the earliest version it is reasonably possible
      to test upgrading from.
  9. Nov 29, 2012
    • Fix assorted bugs in CREATE/DROP INDEX CONCURRENTLY. · 3c840464
      Tom Lane authored
      Commit 8cb53654, which introduced DROP
      INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor
      choice of catalog state representation.  The pg_index state for an index
      that's reached the final pre-drop stage was the same as the state for an
      index just created by CREATE INDEX CONCURRENTLY.  This meant that the
      (necessary) change to make RelationGetIndexList ignore about-to-die indexes
      also made it ignore freshly-created indexes; which is catastrophic because
      the latter do need to be considered in HOT-safety decisions.  Failure to
      do so leads to incorrect index entries and subsequently wrong results from
      queries depending on the concurrently-created index.
      
      To fix, add an additional boolean column "indislive" to pg_index, so that
      the freshly-created and about-to-die states can be distinguished.  (This
      change obviously is only possible in HEAD.  This patch will need to be
      back-patched, but in 9.2 we'll use a kluge consisting of overloading the
      formerly-impossible state of indisvalid = true and indisready = false.)
      
      In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index
      flag changes they make without exclusive lock on the index are made via
      heap_inplace_update() rather than a normal transactional update.  The
      latter is not very safe because moving the pg_index tuple could result in
      concurrent SnapshotNow scans finding it twice or not at all, thus possibly
      resulting in index corruption.  This is a pre-existing bug in CREATE INDEX
      CONCURRENTLY, which was copied into the DROP code.
      
      In addition, fix various places in the code that ought to check to make
      sure that the indexes they are manipulating are valid and/or ready as
      appropriate.  These represent bugs that have existed since 8.2, since
      a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid
      index behind, and we ought not try to do anything that might fail with
      such an index.
      
      Also fix RelationReloadIndexInfo to ensure it copies all the pg_index
      columns that are allowed to change after initial creation.  Previously we
      could have been left with stale values of some fields in an index relcache
      entry.  It's not clear whether this actually had any user-visible
      consequences, but it's at least a bug waiting to happen.
      
      In addition, do some code and docs review for DROP INDEX CONCURRENTLY;
      some cosmetic code cleanup but mostly addition and revision of comments.
      
      This will need to be back-patched, but in a noticeably different form,
      so I'm committing it to HEAD before working on the back-patch.
      
      Problem reported by Amit Kapila, diagnosis by Pavan Deolasee,
      fix by Tom Lane and Andres Freund.
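      
      To summarize the state representation described above (a sketch
      based on this commit's description, in the style of a catalog
      comment):
      
          /*
           * pg_index flag combinations after this patch (HEAD):
           *
           *  indislive indisready indisvalid  meaning
           *  --------- ---------- ----------  -------------------------------
           *  true      false      false       CREATE INDEX CONCURRENTLY
           *                                   phase 1: considered in
           *                                   HOT-safety decisions, but not
           *                                   yet accepting inserts
           *  true      true       false       accepting inserts, not yet
           *                                   usable in queries
           *  true      true       true        fully valid index
           *  false     false      false       DROP INDEX CONCURRENTLY:
           *                                   ignore entirely, about to be
           *                                   removed
           *
           * In 9.2, which lacks indislive, the about-to-die state is
           * instead represented by the formerly impossible combination
           * indisvalid = true, indisready = false.
           */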
  10. Nov 18, 2012
    • Limit values of archive_timeout, post_auth_delay, auth_delay.milliseconds. · b6e3798f
      Tom Lane authored
      The previous definitions of these GUC variables allowed them to range
      up to INT_MAX, but in point of fact the underlying code would suffer
      overflows or other errors with large values.  Reduce the maximum values
      to something that won't misbehave.  There's no apparent value in working
      harder than this, since very large delays aren't sensible for any of
      these.  (Note: the risk with archive_timeout is that if we're late
      checking the state, the timestamp difference it's being compared to
      might overflow.  So we need some amount of slop; the choice of INT_MAX/2
      is arbitrary.)
      
      Per followup investigation of bug #7670.  Although this isn't a very
      significant fix, might as well back-patch.
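      
      An illustration of the overflow risk (hypothetical code, not the
      actual checkpointer logic):
      
          /* If the check runs late, "elapsed" can exceed the timeout by a
           * wide margin, and arithmetic that combines it with a timeout
           * near INT_MAX (e.g. computing a deadline or a remaining wait)
           * can overflow.  Capping the GUC at INT_MAX/2 leaves slop. */
          int elapsed = (int) (time(NULL) - last_xlog_switch_time);
          if (elapsed >= XLogArchiveTimeout)
              RequestXLogSwitch();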