  1. Jul 15, 2014
  2. Jul 12, 2014
    • Add autocompletion of locale keywords for CREATE DATABASE · 2ead596c
      Magnus Hagander authored
      Adds support for autocomplete of LC_COLLATE and LC_CTYPE to
      the CREATE DATABASE command in psql.
    • Fix bug with whole-row references to append subplans. · 261f954e
      Tom Lane authored
      ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
      TupleTableSlot just once per query.  But if the input is coming from an
      Append (or, perhaps, other cases?) more than one slot might be returned
      over the query run.  This led to "record type has not been registered"
      errors when a composite datum was extracted from a non-blessed slot.
      
      This bug has been there a long time; I guess it escaped notice because when
      dealing with subqueries the planner tends to expand whole-row Vars into
      RowExprs, which don't have the same problem.  It is possible to trigger
      the problem in all active branches, though, as illustrated by the added
      regression test.
  3. Jul 08, 2014
    • Don't assume a subquery's output is unique if there's a SRF in its tlist. · 189bd09c
      Tom Lane authored
      While the x output of "select x from t group by x" can be presumed unique,
      this does not hold for "select x, generate_series(1,10) from t group by x",
      because we may expand the set-returning function after the grouping step.
      (Perhaps that should be re-thought; but considering all the other oddities
      involved with SRFs in targetlists, it seems unlikely we'll change it.)
      Put a check in query_is_distinct_for() so it's not fooled by such cases.
      
      Back-patch to all supported branches.
      
      David Rowley
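      The interaction described above can be mimicked outside the planner. A small
      Python analogy (table contents and the series range are made up for
      illustration): grouping first yields unique x values, but expanding a
      set-returning function afterwards duplicates them.

      ```python
      # Rows surviving "GROUP BY x": x is unique at this point.
      grouped = [("a",), ("b",)]

      # Expanding a set-returning function like generate_series(1, 3)
      # afterwards repeats each x once per generated element.
      expanded = [(x, i) for (x,) in grouped for i in range(1, 4)]

      x_values = [x for (x, _) in expanded]
      print(x_values)  # "a" and "b" each appear three times: no longer unique
      ```

      This is why query_is_distinct_for() must refuse to treat such a subquery's
      output as unique.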
  4. Jul 07, 2014
    • pg_upgrade: allow upgrades for new-only TOAST tables · 759c9fb6
      Bruce Momjian authored
      Previously, when calculations on the need for toast tables changed,
      pg_upgrade could not handle cases where the new cluster needed a TOAST
      table and the old cluster did not.  (It already handled the opposite
      case.)  This fixes the "OID mismatch" error typically generated in this
      case.
      
      Backpatch through 9.2
  5. Jul 02, 2014
    • Add some errdetail to checkRuleResultList(). · 981518ea
      Tom Lane authored
      This function wasn't originally thought to be really user-facing,
      because converting a table to a view isn't something we expect people
      to do manually.  So not all that much effort was spent on the error
      messages; in particular, while the code will complain that you got
      the column types wrong it won't say exactly what they are.  But since
      we repurposed the code to also check compatibility of rule RETURNING
      lists, it's definitely user-facing.  It now seems worthwhile to add
      errdetail messages showing exactly what the conflict is when there's
      a mismatch of column names or types.  This is prompted by bug #10836
      from Matthias Raffelsieper, which might have been forestalled if the
      error message had reported the wrong column type as being "record".
      
      Per Alvaro's advice, back-patch to branches before 9.4, but resist
      the temptation to rephrase any existing strings there.  Adding new
      strings is not really a translation degradation; anyway having the
      info presented in English is better than not having it at all.
  6. Jul 01, 2014
    • Fix inadequately-sized output buffer in contrib/unaccent. · c66256b9
      Tom Lane authored
      The output buffer size in unaccent_lexize() was calculated as input string
      length times pg_database_encoding_max_length(), which effectively assumes
      that replacement strings aren't more than one character.  While that was
      all that we previously documented it to support, the code actually has
      always allowed replacement strings of arbitrary length; so if you tried
      to make use of longer strings, you were at risk of buffer overrun.  To fix,
      use an expansible StringInfo buffer instead of trying to determine the
      maximum space needed a priori.
      
      This would be a security issue if unaccent rules files could be installed
      by unprivileged users; but fortunately they can't, so in the back branches
      the problem can be labeled as improper configuration by a superuser.
      Nonetheless, a memory stomp isn't a nice way of reacting to improper
      configuration, so let's back-patch the fix.
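      The sizing bug is easy to see with a toy calculation (the rule table and
      numbers are illustrative, not the actual unaccent code): a bound of input
      length times the maximum bytes per character cannot cover multi-character
      replacement strings, while appending into a growable buffer always can.

      ```python
      def fixed_bound(text, max_bytes_per_char):
          # Old approach: assumes each input character produces at most one
          # output character of at most max_bytes_per_char bytes.
          return len(text) * max_bytes_per_char

      # Hypothetical rules with multi-character replacements, the case the
      # fixed bound never accounted for.
      rules = {"ß": "ss", "æ": "ae"}
      src = "ßæ"

      # Safe approach: build the output incrementally (like StringInfo).
      out = "".join(rules.get(ch, ch) for ch in src)

      # In a single-byte server encoding, max_bytes_per_char is 1: the old
      # bound reserves 2 bytes, but the translated text needs 4.
      print(len(out), fixed_bound(src, 1))
      ```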
  7. Jun 30, 2014
  8. Jun 26, 2014
    • Back-patch "Fix EquivalenceClass processing for nested append relations". · 0cf16686
      Tom Lane authored
      When we committed a87c7291, we somehow
      failed to notice that it didn't merely improve plan quality for expression
      indexes; there were very closely related cases that failed outright with
      "could not find pathkey item to sort".  The failing cases seem to be those
      where the planner was already capable of selecting a MergeAppend plan,
      and there was inheritance involved: the lack of appropriate eclass child
      members would prevent prepare_sort_from_pathkeys() from succeeding on the
      MergeAppend's child plan nodes for inheritance child tables.
      
      Accordingly, back-patch into 9.1 through 9.3, along with an extra
      regression test case covering the problem.
      
      Per trouble report from Michael Glaesemann.
    • Remove obsolete example of CSV log file name from log_filename document. · 4ee45945
      Fujii Masao authored
      7380b638 changed log_filename so that the epoch was no longer appended
      to it when no format specifier is given. But an example of a CSV log
      file name with the epoch was still left in the log_filename
      documentation. This commit removes that obsolete example.
      
      This commit also documents the defaults of log_directory and
      log_filename.
      
      Backpatch to all supported versions.
      
      Christoph Berg
  9. Jun 24, 2014
    • Don't allow foreign tables with OIDs. · 1c9f9e88
      Heikki Linnakangas authored
      The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
      was still possible with default_with_oids=true. But the rest of the system,
      including pg_dump, isn't prepared to handle foreign tables with OIDs
      properly.
      
      Backpatch down to 9.1, where foreign tables were introduced. It's possible
      that there are databases out there that already have foreign tables with
      OIDs. There isn't much we can do about that, but at least we can prevent
      them from being created in the future.
      
      Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
  10. Jun 21, 2014
    • Fix documentation template for CREATE TRIGGER. · 07353de4
      Kevin Grittner authored
      By using curly braces, the template had specified that one of
      "NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"
      was required on any CREATE TRIGGER statement, which is not
      accurate.  Change to square brackets makes that optional.
      
      Backpatch to 9.1, where the error was introduced.
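      The shape of the change, per the description above (exact synopsis wording
      is in the CREATE TRIGGER reference page), is the usual doc-template
      convention of curly braces for a required choice and square brackets for
      an optional one:

      ```
      before:  { NOT DEFERRABLE | INITIALLY IMMEDIATE | INITIALLY DEFERRED }
      after:   [ NOT DEFERRABLE | INITIALLY IMMEDIATE | INITIALLY DEFERRED ]
      ```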
  11. Jun 20, 2014
    • Clean up data conversion short-lived memory context. · 3e2cfa42
      Joe Conway authored
      dblink uses a short-lived data conversion memory context. However it
      was not deleted when no longer needed, leading to a noticeable memory
      leak under some circumstances. Plug the hole, along with minor
      refactoring. Backpatch to 9.2 where the leak was introduced.
      
      Report and initial patch by MauMau. Reviewed/modified slightly by
      Tom Lane and me.
    • Avoid leaking memory while evaluating arguments for a table function. · b568d383
      Tom Lane authored
      ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM
      in the query-lifespan memory context.  This is insignificant in simple
      cases where the function relation is scanned only once; but if the function
      is in a sub-SELECT or is on the inside of a nested loop, any memory
      consumed during argument evaluation can add up quickly.  (The potential for
      trouble here had been foreseen long ago, per existing comments; but we'd
      not previously seen a complaint from the field about it.)  To fix, create
      an additional temporary context just for this purpose.
      
      Per an example from MauMau.  Back-patch to all active branches.
  12. Jun 14, 2014
    • Make pqsignal() available to pg_regress of ECPG and isolation suites. · 0ae841a9
      Noah Misch authored
      Commit 453a5d91 made it available to the
      src/test/regress build of pg_regress, but all pg_regress builds need the
      same treatment.  Patch 9.2 through 8.4; in 9.3 and later, pg_regress
      gets pqsignal() via libpgport.
    • Secure Unix-domain sockets of "make check" temporary clusters. · 453a5d91
      Noah Misch authored
      Any OS user able to access the socket can connect as the bootstrap
      superuser and proceed to execute arbitrary code as the OS user running
      the test.  Protect against that by placing the socket in a temporary,
      mode-0700 subdirectory of /tmp.  The pg_regress-based test suites and
      the pg_upgrade test suite were vulnerable; the $(prove_check)-based test
      suites were already secure.  Back-patch to 8.4 (all supported versions).
      The hazard remains wherever the temporary cluster accepts TCP
      connections, notably on Windows.
      
      As a convenient side effect, this lets testing proceed smoothly in
      builds that override DEFAULT_PGSOCKET_DIR.  Popular non-default values
      like /var/run/postgresql are often unwritable to the build user.
      
      Security: CVE-2014-0067
    • Add mkdtemp() to libpgport. · a919937f
      Noah Misch authored
      This function is pervasive on free software operating systems; import
      NetBSD's implementation.  Back-patch to 8.4, like the commit that will
      harness it.
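      Python's tempfile.mkdtemp offers the same guarantee the commit relies on
      from the C mkdtemp(): a freshly created, mode-0700 directory that only
      the creating OS user can enter, which is what makes it a safe home for a
      test cluster's Unix-domain socket. (PostgreSQL uses NetBSD's C
      implementation, of course; this sketch only illustrates the behavior.)

      ```python
      import os
      import stat
      import tempfile

      # Create a private temporary directory; mkdtemp makes it mode 0700,
      # so no other OS user can reach a socket placed inside it.
      sockdir = tempfile.mkdtemp(prefix="pg_test_")
      mode = stat.S_IMODE(os.stat(sockdir).st_mode)
      print(oct(mode))  # 0o700 on POSIX systems
      os.rmdir(sockdir)
      ```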
  13. Jun 13, 2014
    • Fix pg_restore's processing of old-style BLOB COMMENTS data. · ce7fc4fb
      Tom Lane authored
      Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch
      of COMMENT commands into a single BLOB COMMENTS archive object.  With
      sufficiently many such comments, some of the commands would likely get
      split across bufferloads when restoring, causing failures in
      direct-to-database restores (though no problem would be evident in text
      output).  This is the same type of issue we have with table data dumped as
      INSERT commands, and it can be fixed in the same way, by using a mini SQL
      lexer to figure out where the command boundaries are.  Fortunately, the
      COMMENT commands are no more complex to lex than INSERTs, so we can just
      re-use the existing lexer for INSERTs.
      
      Per bug #10611 from Jacek Zalewski.  Back-patch to all active branches.
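      The idea of the fix can be sketched in a few lines. This is a simplified
      Python stand-in (it ignores dollar-quoting, backslash escapes, and
      comments, which the real lexer must handle): scan the buffer and split
      only at semicolons that fall outside quoted strings.

      ```python
      def split_sql_commands(buf):
          """Split SQL at top-level semicolons, keeping semicolons that
          appear inside single- or double-quoted strings intact."""
          cmds, cur, quote = [], [], None
          for ch in buf:
              cur.append(ch)
              if quote:
                  if ch == quote:      # closing quote of the current string
                      quote = None
              elif ch in ("'", '"'):   # opening quote
                  quote = ch
              elif ch == ";":          # command boundary
                  cmds.append("".join(cur).strip())
                  cur = []
          tail = "".join(cur).strip()
          if tail:
              cmds.append(tail)
          return cmds

      cmds = split_sql_commands("COMMENT ON LARGE OBJECT 1 IS 'a;b'; "
                                "COMMENT ON LARGE OBJECT 2 IS 'c';")
      print(len(cmds))  # 2: the quoted semicolon did not split the first command
      ```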
  14. Jun 12, 2014
  15. Jun 11, 2014
    • Fix ancient encoding error in hungarian.stop. · 80232353
      Tom Lane authored
      When we grabbed this file off the Snowball project's website, we mistakenly
      supposed that it was in LATIN1 encoding, but evidently it was actually in
      LATIN2.  This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in
      LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in
      bug #10589 from Zoltán Sörös.  We'd have messed up u-double-acute too,
      but there aren't any of those in the file.  Other characters used in the
      file have the same codes in LATIN1 and LATIN2, which no doubt helped hide
      the problem for so long.
      
      The error is not only ours: the Snowball project also was confused about
      which encoding is required for Hungarian.  But dealing with that will
      require source-code changes that I'm not at all sure we'll wish to
      back-patch.  Fixing the stopword file seems reasonably safe to back-patch
      however.
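      The misconversion is reproducible with any codec library; in Python, for
      instance, byte 0xF5 decodes to exactly the two different characters the
      commit describes:

      ```python
      raw = b"\xf5"
      # In LATIN2 (ISO 8859-2), 0xF5 is ő (o-double-acute, U+0151).
      # In LATIN1 (ISO 8859-1), the same byte is õ (o-tilde, U+00F5).
      print(raw.decode("iso8859-2"), raw.decode("latin-1")))if False else None
      o_latin2 = raw.decode("iso8859-2")
      o_latin1 = raw.decode("latin-1")
      # Reading LATIN2 bytes as LATIN1 therefore turns every ő into õ,
      # the corruption seen in hungarian.stop.
      print(o_latin2, o_latin1)
      ```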
  16. Jun 10, 2014
    • Fix planner bug with nested PlaceHolderVars in 9.2 (only). · 187ae173
      Tom Lane authored
      Commit 9e7e29c7 fixed some problems with
      LATERAL references in PlaceHolderVars, one of which was that "createplan.c
      wasn't handling nested PlaceHolderVars properly".  I failed to see that
      this problem might occur in older versions as well; but it can, as
      demonstrated in bug #10587 from Geoff Speicher.  In this case the nesting
      occurs due to push-down of PlaceHolderVar expressions into a parameterized
      path.  So, back-patch the relevant changes from 9e7e29c7 into 9.2 where
      parameterized paths were introduced.  (Perhaps I'm still being too myopic,
      but I'm hesitant to change older branches without some evidence that the
      case can occur there.)
  17. Jun 09, 2014
    • Fix infinite loop when splitting inner tuples in SPGiST text indexes. · 93328b2d
      Tom Lane authored
      Previously, the code used a node label of zero both for strings that
      contain no bytes beyond the inner tuple's prefix, and for cases where an
      "allTheSame" inner tuple has to be split to allow a string with a different
      next byte to be inserted into it.  Failing to distinguish these cases meant
      that if a string ending with the current prefix needed to be inserted into
      an allTheSame tuple, we got into an infinite loop, because after splitting
      the tuple we'd descend into the child allTheSame tuple and then find we
      need to split again.
      
      To fix, instead use -1 and -2 as the node labels for these two cases.
      This requires widening the node label type from "char" to int2, but
      fortunately SPGiST stores all pass-by-value node label types in their
      Datum representation, which means that this change is transparently upward
      compatible so far as the on-disk representation goes.  We continue to
      recognize zero as a dummy node label for reading purposes, but will not
      attempt to push new index entries down into such a label, so that the loop
      won't occur even when dealing with an existing index.
      
      Per report from Teodor Sigaev.  Back-patch to 9.2 where the faulty
      code was introduced.
  18. Jun 06, 2014
    • Fix breakages of hot standby regression test. · bdc5400b
      Fujii Masao authored
      This commit changes the HS regression test so that it uses a
      REPEATABLE READ transaction instead of a SERIALIZABLE one, because
      the SERIALIZABLE transaction isolation level is not available in HS.
      This commit also fixes a VACUUM/ANALYZE label mixup.
      
      This was fixed in HEAD (commit 2985e160), but it should have been
      back-patched to 9.1, which introduced SSI and forbade SERIALIZABLE
      transactions in HS.
      
      Amit Langote
  19. Jun 05, 2014
    • Add defenses against running with a wrong selection of LOBLKSIZE. · 4fb64782
      Tom Lane authored
      It's critical that the backend's idea of LOBLKSIZE match the way data has
      actually been divided up in pg_largeobject.  While we don't provide any
      direct way to adjust that value, doing so is a one-line source code change
      and various people have expressed interest recently in changing it.  So,
      just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in
      pg_control and cross-check that the backend's compiled-in setting matches
      the on-disk data.
      
      Also tweak the code in inv_api.c so that fetches from pg_largeobject
      explicitly verify that the length of the data field is not more than
      LOBLKSIZE.  Formerly we just had Asserts() for that, which is no protection
      at all in production builds.  In some of the call sites an overlength data
      value would translate directly to a security-relevant stack clobber, so it
      seems worth one extra runtime comparison to be sure.
      
      In the back branches, we can't change the contents of pg_control; but we
      can still make the extra checks in inv_api.c, which will offer some amount
      of protection against running with the wrong value of LOBLKSIZE.
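      The principle of the inv_api.c change, replacing an assertion with a real
      runtime check, can be sketched like this (the LOBLKSIZE value and the
      function name are illustrative, not PostgreSQL's actual code):

      ```python
      LOBLKSIZE = 2048  # illustrative; must match how pg_largeobject was written

      def read_lo_chunk(data: bytes) -> bytes:
          # An Assert() compiles away in production builds and offers no
          # protection there; an explicit runtime check does not.
          if len(data) > LOBLKSIZE:
              raise ValueError("pg_largeobject chunk exceeds LOBLKSIZE")
          return data

      print(len(read_lo_chunk(b"x" * 10)))  # 10: a normal chunk passes
      ```

      An overlength chunk now raises an error instead of clobbering a
      fixed-size destination, which is the stack-clobber hazard the commit
      describes.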
  20. Jun 04, 2014
    • Fix longstanding bug in HeapTupleSatisfiesVacuum(). · 315442c0
      Andres Freund authored
      HeapTupleSatisfiesVacuum() didn't properly discern between
      DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been
      inserted in the current transaction and deleted in an aborted
      subtransaction of the current backend. At the very least that caused
      problems for CLUSTER and CREATE INDEX in transactions that had
      aborting subtransactions producing rows, leading to warnings like:
      WARNING:  concurrent delete in progress within table "..."
      possibly in an endless, uninterruptible loop.
      
      Instead of treating *InProgress xmins the same as *IsCurrent ones,
      treat them as being distinct, like the other visibility routines do.
      As implemented, this separation can cause a behaviour change for rows
      that have been inserted and deleted in another, still running,
      transaction: HTSV will now return INSERT_IN_PROGRESS instead of
      DELETE_IN_PROGRESS for those. That's both more in line with the other
      visibility routines and arguably more correct, the latter because an
      INSERT_IN_PROGRESS result makes callers look at/wait for xmin instead
      of xmax.
      The only current caller where that's possibly worse than the old
      behaviour is heap_prune_chain() which now won't mark the page as
      prunable if a row has concurrently been inserted and deleted. That's
      harmless enough.
      
      As a cautionary measure, also insert an interrupt check before the
      gotos in IndexBuildHeapScan() that lead to the uninterruptible loop.
      There are other possible causes of repeated loops, such as a row that
      several sessions try to update and all fail on, and the cost of the
      check in the retry case is low.
      
      As this bug goes back all the way to the introduction of
      subtransactions in 573a71a5 backpatch to all supported releases.
      
      Reported-By: Sandro Santilli
  21. Jun 03, 2014
    • Make plpython_unicode regression test work in more database encodings. · 658fad7f
      Tom Lane authored
      This test previously used a data value containing U+0080, and would
      therefore fail if the database encoding didn't have an equivalent to
      that; which only about half of our supported server encodings do.
      We could fall back to using some plain-ASCII character, but that seems
      like it's losing most of the point of the test.  Instead switch to using
      U+00A0 (no-break space), which translates into all our supported encodings
      except the four in the EUC_xx family.
      
      Per buildfarm testing.  Back-patch to 9.1, which is as far back as this
      test is expected to succeed everywhere.  (9.0 has the test, but without
      back-patching some 9.1 code changes we could not expect to get consistent
      results across platforms anyway.)
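      The trade-off is easy to check with an ordinary codec library (Python
      shown here): U+00A0 round-trips through LATIN1, while an EUC-family
      encoding such as EUC_JP rejects it, matching the commit's claim that the
      EUC_xx family is the exception.

      ```python
      nbsp = "\u00a0"  # no-break space
      latin1_bytes = nbsp.encode("latin-1")  # representable: byte 0xA0

      try:
          nbsp.encode("euc_jp")
          representable = True
      except UnicodeEncodeError:
          representable = False
      print(representable)  # False: EUC_JP has no mapping for U+00A0
      ```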
    • Set the process latch when processing recovery conflict interrupts. · f998e994
      Andres Freund authored
      Because RecoveryConflictInterrupt() didn't set the process latch,
      anything using the latch to wait for events didn't get notified about
      recovery conflicts. Most latch users are never the target of recovery
      conflicts, which explains the lack of reports about this until
      now.
      Since 9.3 two possible affected users exist though: The sql callable
      pg_sleep() now uses latches to wait and background workers are
      expected to use latches in their main loop. Both would currently wait
      until the end of WaitLatch's timeout.
      
      Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also
      be possible to fix the issue by having each latch user set
      set_latch_on_sigusr1. That seems failure prone, though, as most of
      these callsites won't often receive recovery conflicts and thus will
      likely only be tested against normal query cancels et al. It'd also be
      unnecessarily verbose.
      
      Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be
      sufficient, because that's where pg_sleep() was converted to waiting
      on the latch and background workers got introduced; but there could be
      user level code making use of the latch pre 9.3.
  22. Jun 01, 2014
  23. May 31, 2014
    • On OS X, link libpython normally, ignoring the "framework" framework. · 83ed4598
      Tom Lane authored
      As of Xcode 5.0, Apple isn't including the Python framework as part of the
      SDK-level files, which means that linking to it might fail depending on
      whether Xcode thinks you've selected a specific SDK version.  According to
      their Tech Note 2328, they've basically deprecated the framework method of
      linking to libpython and are telling people to link to the shared library
      normally.  (I'm pretty sure this is in direct contradiction to the advice
      they were giving a few years ago, but whatever.)  Testing says that this
      approach works fine at least as far back as OS X 10.4.11, so let's just
      rip out the framework special case entirely.  We do still need a special
      case to decide that OS X provides a shared library at all, unfortunately
      (I wonder why the distutils check doesn't work ...).  But this is still
      less of a special case than before, so it's fine.
      
      Back-patch to all supported branches, since we'll doubtless be hearing
      about this more as more people update to recent Xcode.
  24. May 29, 2014
    • When using the OSSP UUID library, cache its uuid_t state object. · 2fb9fb66
      Tom Lane authored
      The original coding in contrib/uuid-ossp created and destroyed a uuid_t
      object (or, in some cases, even two of them) each time it was called.
      This is not the intended usage: you're supposed to keep the uuid_t object
      around so that the library can cache its state across uses.  (Other UUID
      libraries seem to keep equivalent state behind-the-scenes in static
      variables, but OSSP chose differently.)  Aside from being quite inefficient,
      creating a new uuid_t loses knowledge of the previously generated UUID,
      which in theory could result in duplicate V1-style UUIDs being created
      on sufficiently fast machines.
      
      On at least some platforms, creating a new uuid_t also draws some entropy
      from /dev/urandom, leaving less for the rest of the system.  This seems
      sufficiently unpleasant to justify back-patching this change.
    • Revert "Fix bogus %name-prefix option syntax in all our Bison files." · 952b0360
      Tom Lane authored
      This reverts commit 867363cb.
      
      It turns out that the %name-prefix syntax without "=" does not work
      at all in pre-2.4 Bison.  We are not prepared to make such a large
      jump in minimum required Bison version just to suppress a warning
      message in a version hardly any developers are using yet.
      When 3.0 gets more popular, we'll figure out a way to deal with this.
      In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for
      anyone using 3.0 who doesn't want to see the warning.
  25. May 28, 2014
    • Fix bogus %name-prefix option syntax in all our Bison files. · 867363cb
      Tom Lane authored
      %name-prefix doesn't use an "=" sign according to the Bison docs, but it
      silently accepted one anyway, until Bison 3.0.  This was originally a
      typo of mine in commit 012abeba, and we
      seem to have slavishly copied the error into all the other grammar files.
      
      Per report from Vik Fearing; analysis by Peter Eisentraut.
      
      Back-patch to all active branches, since somebody might try to build
      a back branch with up-to-date tools.
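      For reference, the two spellings in a grammar file look like this (the
      prefix string is illustrative):

      ```
      %name-prefix="base_yy_"    /* with "=": not in the docs, but silently accepted until Bison 3.0 */
      %name-prefix "base_yy_"    /* without "=": matches the docs, but fails on pre-2.4 Bison */
      ```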
    • Ensure cleanup in case of early errors in streaming base backups · dbcde0f4
      Magnus Hagander authored
      Move the code that sends the initial status information as well as the
      calculation of paths inside the ENSURE_ERROR_CLEANUP block. If this code
      failed, we would "leak" a counter of number of concurrent backups, thereby
      making the system always believe it was in backup mode. This could happen
      if the sending failed (which it probably never did given that the small
      amount of data to send would never cause a flush). It is very low risk, but
      all operations after do_pg_start_backup should be protected.
  26. May 27, 2014
    • Avoid unportable usage of sscanf(UINT64_FORMAT). · 9a21ac08
      Tom Lane authored
      On Mingw, it seems that scanf() doesn't necessarily accept the same format
      codes that printf() does, and in particular it may fail to recognize %llu
      even though printf() does.  Since configure only probes printf() behavior
      while setting up the INT64_FORMAT macros, this means it's unsafe to use
      those macros with scanf().  We had only one instance of such a coding
      pattern, in contrib/pg_stat_statements, so change that code to avoid
      the problem.
      
      Per buildfarm warnings.  Back-patch to 9.0 where the troublesome code
      was introduced.
      
      Michael Paquier
  27. May 20, 2014
    • Prevent auto_explain from changing the output of a user's EXPLAIN. · 31f579f0
      Tom Lane authored
      Commit af7914c6, which introduced the
      EXPLAIN (TIMING) option, for some reason coded explain.c to look at
      planstate->instrument->need_timer rather than es->timing to decide
      whether to print timing info.  However, the former flag might get set
      as a result of contrib/auto_explain wanting timing information.  We
      certainly don't want activation of auto_explain to change user-visible
      statement behavior, so fix that.
      
      Also fix an independent bug introduced in the same patch: in the code
      path for a never-executed node with a machine-friendly output format,
      if timing was selected, it would fail to print the Actual Rows and Actual
      Loops items.
      
      Per bug #10404 from Tomonari Katsumata.  Back-patch to 9.2 where the
      faulty code was introduced.
  28. May 19, 2014
    • Use 0-based numbering in comments about backup blocks. · 0128a771
      Heikki Linnakangas authored
      The macros and functions that work with backup blocks in the redo function
      use 0-based numbering, so let's use that consistently in the function that
      generates the records too. Makes it so much easier to compare the
      generation and replay functions.
      
      Backpatch to 9.0, where we switched from 1-based to 0-based numbering.
  29. May 16, 2014
    • Initialize tsId and dbId fields in WAL record of COMMIT PREPARED. · 0d4c75f4
      Heikki Linnakangas authored
      Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed
      that prepared transaction commits reuse that struct. Fix that.
      
      Because those fields were left uninitialized, replaying a commit prepared WAL
      record in a hot standby node would fail to remove the relcache init file.
      That can lead to "could not open file" errors on the standby. Relcache init
      file only needs to be removed when a system table/index is rewritten in the
      transaction using two phase commit, so that should be rare in practice. In
      HEAD, the incorrect dbId/tsId values are also used for filtering in logical
      replication code, causing the transaction to always be filtered out.
      
      Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was
      introduced.
  30. May 15, 2014
    • Fix unportable setvbuf() usage in initdb. · 9601cb7b
      Tom Lane authored
      In yesterday's commit 2dc4f011, I tried
      to force buffering of stdout/stderr in initdb to be what it is by
      default when the program is run interactively on Unix (since that's how
      most manual testing is done).  This tripped over the fact that Windows
      doesn't support _IOLBF mode.  We dealt with that a long time ago in
      syslogger.c by falling back to unbuffered mode on Windows.  Export that
      solution in port.h and use it in initdb.
      
      Back-patch to 8.4, like the previous commit.
    • Handle duplicate XIDs in txid_snapshot. · 479a36f2
      Heikki Linnakangas authored
      The proc array can contain duplicate XIDs, when a transaction is just being
      prepared for two-phase commit. To cope, remove any duplicates in
      txid_current_snapshot(). Also ignore duplicates in the input functions, so
      that if e.g. you have an old pg_dump file that already contains duplicates,
      it will be accepted.
      
      Report and fix by Jan Wieck. Backpatch to all supported versions.
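      The normalization applied by the fix amounts to the usual
      sort-and-deduplicate step, sketched here in Python (the function name is
      made up for illustration):

      ```python
      def normalize_xip(xids):
          # txid_current_snapshot() must tolerate duplicate XIDs, which can
          # appear in the proc array while a transaction is being prepared
          # for two-phase commit: sort the list and drop the duplicates.
          return sorted(set(xids))

      print(normalize_xip([200, 100, 100, 150]))  # [100, 150, 200]
      ```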