  1. Apr 01, 2015
  2. Mar 31, 2015
  3. Mar 30, 2015
    • Andrew Dunstan's avatar
      Run pg_upgrade and pg_resetxlog with restricted token on Windows · 94856631
      Andrew Dunstan authored
      As with initdb, these programs need to run with a restricted token, and
      if they don't, pg_upgrade will fail when run as a user with Administrator
      privileges.
      
      Backpatch to all live branches. On the development branch the code is
      reorganized so that the restricted token code is now in a single
      location. On the stable branches a less invasive change is made by
      simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.
      
      Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
      Paquier, slightly edited by me.
      94856631
    • Tom Lane's avatar
      Fix bogus concurrent use of _hash_getnewbuf() in bucket split code. · f155466f
      Tom Lane authored
      _hash_splitbucket() obtained the base page of the new bucket by calling
      _hash_getnewbuf(), but it held no exclusive lock that would prevent some
      other process from calling _hash_getnewbuf() at the same time.  This is
      contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.
      In practice, we must only call that function while holding write lock on
      the hash index's metapage.
      
      An additional problem was that we'd already modified the metapage's bucket
      mapping data, meaning that failure to extend the index would leave us with
      a corrupt index.
      
      Fix both issues by moving the _hash_getnewbuf() call to just before we
      modify the metapage in _hash_expandtable().
      
      Unfortunately there's still a large problem here, which is that we could
      also incur ENOSPC while trying to get an overflow page for the new bucket.
      That would leave the index corrupt in a more subtle way, namely that some
      index tuples that should be in the new bucket might still be in the old
      one.  Fixing that seems substantially more difficult; even preallocating as
      many pages as we could possibly need wouldn't entirely guarantee that the
      bucket split would complete successfully.  So for today let's just deal
      with the base case.
      
      Per report from Antonin Houska.  Back-patch to all active branches.
      f155466f
  4. Mar 29, 2015
    • Tom Lane's avatar
      Add vacuum_delay_point call in compute_index_stats's per-sample-row loop. · d12afe11
      Tom Lane authored
      Slow functions in index expressions might cause this loop to take long
      enough to make it worth being cancellable.  Probably it would be enough
      to call CHECK_FOR_INTERRUPTS here, but for consistency with other
      per-sample-row loops in this file, let's use vacuum_delay_point.
      
      Report and patch by Jeff Janes.  Back-patch to all supported branches.
      d12afe11
  5. Mar 26, 2015
  6. Mar 24, 2015
    • Tom Lane's avatar
      Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation. · 3fbfd5db
      Tom Lane authored
      ExecOpenScanRelation assumed that any relation listed in the ExecRowMark
      list has been locked by InitPlan; but this is not true if the rel's
      markType is ROW_MARK_COPY, which is possible if it's a foreign table.
      
      In most (possibly all) cases, failure to acquire a lock here isn't really
      problematic because the parser, planner, or plancache would have taken the
      appropriate lock already.  In principle though it might leave us vulnerable
      to working with a relation that we hold no lock on, and in any case if the
      executor isn't depending on previously-taken locks otherwise then it should
      not do so for ROW_MARK_COPY relations.
      
      Noted by Etsuro Fujita.  Back-patch to all active versions, since the
      inconsistency has been there a long time.  (It's almost certainly
      irrelevant in 9.0, since that predates foreign tables, but the code's
      still wrong on its own terms.)
      3fbfd5db
  7. Mar 16, 2015
    • Tom Lane's avatar
      Replace insertion sort in contrib/intarray with qsort(). · 8582ae7a
      Tom Lane authored
      It's all very well to claim that a simplistic sort is fast in easy
      cases, but O(N^2) in the worst case is not good ... especially if the
      worst case is as easy to hit as "descending order input".  Replace that
      bit with our standard qsort.
      
      Per bug #12866 from Maksym Boguk.  Back-patch to all active branches.
      8582ae7a
  8. Mar 14, 2015
    • Tom Lane's avatar
      Remove workaround for ancient incompatibility between readline and libedit. · 309ff2ad
      Tom Lane authored
      GNU readline defines the return value of write_history() as "zero if OK,
      else an errno code".  libedit's version of that function used to have a
      different definition (to wit, "-1 if error, else the number of lines
      written to the file").  We tried to work around that by checking whether
      errno had become nonzero, but this method has never been kosher according
      to the published API of either library.  It's reportedly completely broken
      in recent Ubuntu releases: psql bleats about "No such file or directory"
      when saving ~/.psql_history, even though the write worked fine.
      
      However, libedit has been following the readline definition since somewhere
      around 2006, so it seems all right to finally break compatibility with
      ancient libedit releases and trust that the return value is what readline
      specifies.  (I'm not sure when the various Linux distributions incorporated
      this fix, but I did find that OS X has been shipping fixed versions since
      10.5/Leopard.)
      
      If anyone is still using such an ancient libedit, they will find that psql
      complains it can't write ~/.psql_history at exit, even when the file was
      written correctly.  This is no worse than the behavior we're fixing for
      current releases.
      
      Back-patch to all supported branches.
      309ff2ad
    • Tatsuo Ishii's avatar
      Fix integer overflow in debug message of walreceiver · 4909cb59
      Tatsuo Ishii authored
      The message tries to report the replication apply delay, which fails if
      the first WAL record is not applied yet. The fix is, instead of reporting
      an overflowed negative number, to show "N/A", indicating that the delay
      data is not yet available. Problem reported by me and patch by
      Fabrízio de Royes Mello.
      
      Back patched to 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not
      have the debug message).
      4909cb59
  9. Mar 12, 2015
    • Tom Lane's avatar
      Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples. · 590fc5d9
      Tom Lane authored
      The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting
      tableoid to InvalidOid, I think on the assumption that the referenced
      RTE must be a subquery or other case without a meaningful OID.  However,
      foreign tables also use this code path, and they do have meaningful
      table OIDs; so failure to set the tuple field can lead to user-visible
      misbehavior.  Fix that by fetching the appropriate OID from the range
      table.
      
      There's still an issue about whether CTID can ever have a meaningful
      value in this case; at least with postgres_fdw foreign tables, it does.
      But that is a different problem that seems to require a significantly
      different patch --- it's debatable whether postgres_fdw really wants to
      use this code path at all.
      
      Simplified version of a patch by Etsuro Fujita, who also noted the
      problem to begin with.  The issue can be demonstrated in all versions
      having FDWs, so back-patch to 9.1.
      590fc5d9
  10. Mar 08, 2015
    • Tom Lane's avatar
      Fix documentation for libpq's PQfn(). · ae67e81e
      Tom Lane authored
      The SGML docs claimed that 1-byte integers could be sent or received with
      the "isint" options, but no such behavior has ever been implemented in
      pqGetInt() or pqPutInt().  The in-code documentation header for PQfn() was
      even less in tune with reality, and the code itself used parameter names
      matching neither the SGML docs nor its libpq-fe.h declaration.  Do a bit
      of additional wordsmithing on the SGML docs while at it.
      
      Since the business about 1-byte integers is a clear documentation bug,
      back-patch to all supported branches.
      ae67e81e
  11. Mar 06, 2015
  12. Mar 05, 2015
    • Alvaro Herrera's avatar
      Fix user mapping object description · e166e644
      Alvaro Herrera authored
      We were using "user mapping for user XYZ" as the description for user mappings, but
      that's ambiguous because users can have mappings on multiple foreign
      servers; therefore change it to "for user XYZ on server UVW" instead.
      Object identities for user mappings are also updated in the same way, in
      branches 9.3 and above.
      
      The incomplete description string was introduced together with the whole
      SQL/MED infrastructure by commit cae565e5 in the 8.4 era, so backpatch all
      the way back.
      e166e644
  13. Mar 02, 2015
    • Stephen Frost's avatar
      Fix pg_dump handling of extension config tables · d13bbfab
      Stephen Frost authored
      Since 9.1, we've provided extensions with a way to denote
      "configuration" tables: tables created by an extension which the user
      may modify.  By marking these as "configuration" tables, the extension
      is asking for the data in these tables to be pg_dump'd (tables which
      are not marked in this way are assumed to be entirely handled during
      CREATE EXTENSION and are not included at all in a pg_dump).
      
      Unfortunately, pg_dump neglected to consider foreign key relationships
      between extension configuration tables and therefore could end up
      trying to reload the data in an order which would cause FK violations.
      
      This patch teaches pg_dump about these dependencies, so that the data
      is dumped in the best possible order.  Note that there's no
      way to handle circular dependencies, but those have yet to be seen in
      the wild.
      
      The release notes for this should include a caution to users that
      existing pg_dump-based backups may be invalid due to this issue.  The
      data is all there, but restoring from it will require extracting the
      data for the configuration tables and then loading them in the correct
      order by hand.
      
      Discussed initially back in bug #6738, more recently brought up by
      Gilles Darold, who provided an initial patch which was further reworked
      by Michael Paquier.  Further modifications and documentation updates
      by me.
      
      Back-patch to 9.1 where we added the concept of extension configuration
      tables.
      d13bbfab
  14. Mar 01, 2015
    • Noah Misch's avatar
      Unlink static libraries before rebuilding them. · c3b0baf9
      Noah Misch authored
      When the library already exists in the build directory, "ar" preserves
      members not named on its command line.  This mattered when, for example,
      a "configure" rerun dropped a file from $(LIBOBJS).  libpgport carried
      the obsolete member until "make clean".  Back-patch to 9.0 (all
      supported versions).
      c3b0baf9
  15. Feb 28, 2015
    • Tom Lane's avatar
      Fix planning of star-schema-style queries. · 6f419958
      Tom Lane authored
      Part of the intent of the parameterized-path mechanism was to handle
      star-schema queries efficiently, but some overly-restrictive search
      limiting logic added in commit e2fa76d8
      prevented such cases from working as desired.  Fix that and add a
      regression test about it.  Per gripe from Marc Cousin.
      
      This is arguably a bug rather than a new feature, so back-patch to 9.2
      where parameterized paths were introduced.
      6f419958
  16. Feb 26, 2015
    • Andres Freund's avatar
      Reconsider when to wait for WAL flushes/syncrep during commit. · d6707652
      Andres Freund authored
      Up to now RecordTransactionCommit() waited for WAL to be flushed (if
      synchronous_commit != off) and to be synchronously replicated (if
      enabled), even if a transaction did not have an xid assigned. The primary
      reason for that is that a sequence's nextval() did not assign an xid, but
      its effects are worthwhile to wait for on commit.
      
      This can be problematic because sometimes read-only transactions do
      write WAL, e.g. HOT page prune records. That could lead to read-only
      transactions having to wait during commit, which is not something people
      expect in a read-only transaction.
      
      This led to such strange symptoms as backends being seemingly stuck
      during connection establishment when all synchronous replicas are
      down. Especially annoying when said stuck connection is the standby
      trying to reconnect to allow syncrep again...
      
      This behavior also is involved in a rather complicated <= 9.4 bug where
      the transaction started by catchup interrupt processing waited for
      syncrep using latches, but didn't get the wakeup because it was already
      running inside the same overloaded signal handler. The fix here doesn't
      properly solve that issue; it merely papers over the problem. In
      9.5, catchup interrupts aren't processed inside signal handlers anymore.
      
      To fix all this, make nextval() acquire a top-level xid, and only wait for
      transaction commit if a transaction both acquired an xid and emitted WAL
      records.  If only an xid has been assigned, we don't want to uselessly
      wait just because of writes to temporary/unlogged tables; if only WAL
      has been written, we don't want to wait just because of HOT prunes.
      
      The xid assignment in nextval() is unlikely to cause overhead in
      real-world workloads. For one thing, it only happens once per
      SEQ_LOG_VALS (32) values anyway; for another, only uses of nextval()
      whose result doesn't feed into an insert or similar are affected.
      
      Discussion: 20150223165359.GF30784@awork2.anarazel.de,
          369698E947874884A77849D8FE3680C2@maumau,
          5CF4ABBA67674088B3941894E22A0D25@maumau
      
      Per complaint from maumau and Thom Brown
      
      Backpatch all the way back; 9.0 doesn't have syncrep, but it seems
      better to keep behavior consistent across all maintained branches.
      d6707652
    • Noah Misch's avatar
      Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables. · d7083cc5
      Noah Misch authored
      "RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
      memory.  Back-patch to 9.0 (all supported versions).  Little code ran
      between the premature free and the read, so non-assert builds are
      unlikely to witness user-visible consequences.
      d7083cc5
  17. Feb 25, 2015
    • Tom Lane's avatar
      Fix dumping of views that are just VALUES(...) but have column aliases. · be8801e9
      Tom Lane authored
      The "simple" path for printing VALUES clauses doesn't work if we need
      to attach nondefault column aliases, because there's no place to do that
      in the minimal VALUES() syntax.  So modify get_simple_values_rte() to
      detect nondefault aliases and treat that as a non-simple case.  This
      further exposes that the "non-simple" path never actually worked;
      it didn't produce valid syntax.  Fix that too.  Per bug #12789 from
      Curtis McEnroe, and analysis by Andrew Gierth.
      
      Back-patch to all supported branches.  Before 9.3, this also requires
      back-patching the part of commit 092d7ded
      that created get_simple_values_rte() to begin with; inserting the extra
      test into the old factorization of that logic would've been too messy.
      be8801e9
  18. Feb 23, 2015
    • Andres Freund's avatar
      Guard against spurious signals in LockBufferForCleanup. · c76e6dd7
      Andres Freund authored
      When LockBufferForCleanup() has to wait to get a cleanup lock on a
      buffer, it does so by setting a flag in the buffer header and then waiting
      for other backends to signal it using ProcWaitForSignal().
      Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can
      return for reasons other than the signal it is hoping for. If such a
      spurious signal arrives the wait flags on the buffer header will still
      be set. That then triggers "ERROR: multiple backends attempting to wait
      for pincount 1".
      
      The fix is simple: unset the flag if it is still set when retrying. That
      implies an additional spinlock acquisition/release, but that's unlikely
      to matter given the cost of waiting for a cleanup lock.  Alternatively
      it'd have been possible to move responsibility for maintaining the
      relevant flag to the waiter altogether, but that might have had
      negative consequences due to possible floods of signals, besides being
      more invasive.
      
      This looks to be a very longstanding bug. The relevant code in
      LockBufferForCleanup() hasn't changed materially since its introduction
      and ProcWaitForSignal() was documented to return for unrelated reasons
      since 8.2.  The master-only patch series removing ImmediateInterruptOK
      made it much easier to hit, though, as ProcSendSignal/ProcWaitForSignal
      now uses a latch shared with other tasks.
      
      Per discussion with Kevin Grittner, Tom Lane and me.
      
      Backpatch to all supported branches.
      
      Discussion: 11553.1423805224@sss.pgh.pa.us
      c76e6dd7
    • Heikki Linnakangas's avatar
      Fix potential deadlock with libpq non-blocking mode. · 22c9c8a7
      Heikki Linnakangas authored
      If libpq's output buffer is full, the pqSendSome() function tries to drain
      any incoming data. This avoids a deadlock if the server, e.g., sends a lot
      of NOTICE messages and blocks until we read them. However, pqSendSome()
      only did that in blocking mode. In non-blocking mode, the deadlock could
      still happen.
      
      To fix, take a two-pronged approach:
      
      1. Change the documentation to instruct that when PQflush() returns 1, you
      should wait for both read- and write-ready, and call PQconsumeInput() if it
      becomes read-ready. That fixes the deadlock, but applications are not going
      to change overnight.
      
      2. In pqSendSome(), drain the input buffer before returning 1. This
      alleviates the problem for applications that only wait for write-ready. In
      particular, a slow but steady stream of NOTICE messages during COPY FROM
      STDIN will no longer cause a deadlock. The risk remains that the server
      attempts to send a large burst of data and fills its output buffer, and at
      the same time the client also sends enough data to fill its output buffer.
      The application will deadlock if it goes to sleep, waiting for the socket
      to become write-ready, before the server's data arrives. In practice,
      NOTICE messages and such that the server might be sending are usually
      short, so it's highly unlikely that the server would fill its output buffer
      so quickly.
      
      Backpatch to all supported versions.
      22c9c8a7
  19. Feb 21, 2015
    • Tom Lane's avatar
      Fix misparsing of empty value in conninfo_uri_parse_params(). · 83c3115d
      Tom Lane authored
      After finding an "=" character, the pointer was advanced twice when it
      should only advance once.  This is harmless as long as the value after "="
      has at least one character; but if it doesn't, we'd miss the terminator
      character and include too much in the value.
      
      In principle this could lead to reading off the end of memory.  It does not
      seem worth treating as a security issue though, because it would happen on
      client side, and besides client logic that's taking conninfo strings from
      untrusted sources has much worse security problems than this.
      
      Report and patch received off-list from Thomas Fanghaenel.
      Back-patch to 9.2 where the faulty code was introduced.
      83c3115d
  20. Feb 18, 2015
    • Tom Lane's avatar
      Fix failure to honor -Z compression level option in pg_dump -Fd. · c86f8f36
      Tom Lane authored
      cfopen() and cfopen_write() failed to pass the compression level through
      to zlib, so that you always got the default compression level if you got
      any at all.
      
      In passing, also fix these and related functions so that the correct errno
      is reliably returned on failure; the original coding supposes that free()
      cannot change errno, which is untrue on at least some platforms.
      
      Per bug #12779 from Christoph Berg.  Back-patch to 9.1 where the faulty
      code was introduced.
      
      Michael Paquier
      c86f8f36
  21. Feb 17, 2015
    • Tom Lane's avatar
      Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses. · d068609b
      Tom Lane authored
      In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only
      looked back as far as commit f3aec2c7 where the breakage occurred
      (which is why I thought the IPv4-in-IPv6 business was undocumented).  But
      actually the logic dates back to commit 3c9bb888 and was simply
      broken by erroneous refactoring in the later commit.  A bit of archives
      excavation shows that we added the whole business in response to a report
      that some 2003-era Linux kernels would report IPv4 connections as having
      IPv4-in-IPv6 addresses.  The fact that we've had no complaints since 9.0
      seems to be sufficient confirmation that no modern kernels do that, so
      let's just rip it all out rather than trying to fix it.
      
      Do this in the back branches too, thus essentially deciding that our
      effective behavior since 9.0 is correct.  If there are any platforms on
      which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix
      would have made for a subtle and potentially security-sensitive change in
      the effective meaning of IPv4 pg_hba.conf entries, which does not seem like
      a good thing to do in minor releases.  So let's let the post-9.0 behavior
      stand, and change the documentation to match it.
      
      In passing, I failed to resist the temptation to wordsmith the description
      of pg_hba.conf IPv4 and IPv6 address entries a bit.  A lot of this text
      hasn't been touched since we were IPv4-only.
      d068609b
    • Robert Haas's avatar
      Improve pg_check_dir's handling of closedir() failures. · 319406c2
      Robert Haas authored
      Avoid losing errno if readdir() fails and closedir() works.  This also
      avoids leaking the directory handle when readdir() fails.  Commit
      6f03927f introduced logic to better
      handle readdir() and closedir() failures, but it missed these cases.
      
      Extracted from a larger patch by Marco Nenciarini.
      319406c2
    • Andres Freund's avatar
      Fix wrong merge resolution making pg_receivexlog fail in 9.2. · 6b700301
      Andres Freund authored
      I bungled resolving a conflict while backpatching 2c0a4858 to 9.2, by
      passing mark_done = true to ReceiveXlogStream in pg_receivexlog.c (all
      the other branches are ok). Since pg_receivexlog doesn't use an archive
      directory, that causes 'could not create archive status file "...": No
      such file or directory' errors.
      
      Until 9.2.11 is released, this can be worked around by creating an
      'archive_status' directory in pg_receivexlog's target directory.
      
      Found by Sergey Konoplev.
      6b700301
  22. Feb 16, 2015
    • Tom Lane's avatar
      Fix misuse of memcpy() in check_ip(). · 3913b897
      Tom Lane authored
      The previous coding copied garbage into a local variable, pretty much
      ensuring that the intended test of an IPv6 connection address against a
      promoted IPv4 address from pg_hba.conf would never match.  The lack of
      field complaints likely indicates that nobody realized this was supposed
      to work, which is unsurprising considering that no user-facing docs suggest
      it should work.
      
      In principle this could have led to a SIGSEGV due to reading off the end of
      memory, but since the source address would have pointed to somewhere in the
      function's stack frame, that's quite unlikely.  What led to discovery of
      the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade,
      which is probably because he is now running a system in which memcpy raises
      abort() upon detecting overlapping source and destination areas.  (You'd
      have to additionally suppose some things about the stack frame layout to
      arrive at this conclusion, but it seems plausible.)
      
      This has been broken since the code was added, in commit f3aec2c7,
      so back-patch to all supported branches.
      3913b897
    • Tom Lane's avatar
      Fix null-pointer-deref crash while doing COPY IN with check constraints. · effcaa4c
      Tom Lane authored
      In commit bf7ca158 I introduced an
      assumption that an RTE referenced by a whole-row Var must have a valid eref
      field.  This is false for RTEs constructed by DoCopy, and there are other
      places taking similar shortcuts.  Perhaps we should make all those places
      go through addRangeTableEntryForRelation or its siblings instead of having
      ad-hoc logic, but the most reliable fix seems to be to make the new code in
      ExecEvalWholeRowVar cope if there's no eref.  We can reasonably assume that
      there's no need to insert column aliases if no aliases were provided.
      
      Add a regression test case covering this, and also verifying that a sane
      column name is in fact available in this situation.
      
      Although the known case only crashes in 9.4 and HEAD, it seems prudent to
      back-patch the code change to 9.2, since all the ingredients for a similar
      failure exist in the variant patch applied to 9.3 and 9.2.
      
      Per report from Jean-Pierre Pelletier.
      effcaa4c
  23. Feb 15, 2015
  24. Feb 13, 2015
  25. Feb 12, 2015
    • Bruce Momjian's avatar
      pg_upgrade: quote directory names in delete_old_cluster script · 66f5217f
      Bruce Momjian authored
      This allows the delete script to properly function when special
      characters appear in directory paths, e.g. spaces.
      
      Backpatch through 9.0
      66f5217f
    • Bruce Momjian's avatar
      pg_upgrade: preserve freeze info for postgres/template1 dbs · d99cf27b
      Bruce Momjian authored
      pg_database.datfrozenxid and pg_database.datminmxid were not preserved
      for the 'postgres' and 'template1' databases.  This could cause missing
      clog file errors on access to user tables and indexes after upgrades in
      these databases.
      
      Backpatch through 9.0
      d99cf27b
    • Tom Lane's avatar
      Fix minor memory leak in ident_inet(). · 0168fb07
      Tom Lane authored
      We'd leak the ident_serv data structure if the second pg_getaddrinfo_all
      (the one for the local address) failed.  This is not of great consequence
      because a failure return here just leads directly to backend exit(), but
      if this function is going to try to clean up after itself at all, it should
      not have such holes in the logic.  Try to fix it in a future-proof way by
      having all the failure exits go through the same cleanup path, rather than
      "optimizing" some of them.
      
      Per Coverity.  Back-patch to 9.2, which is as far back as this patch
      applies cleanly.
      0168fb07
    • Tom Lane's avatar
      Fix more memory leaks in failure path in buildACLCommands. · 8b63f894
      Tom Lane authored
      We already had one go at this issue in commit d73b7f97, but we
      failed to notice that buildACLCommands also leaked several PQExpBuffers
      along with a simply malloc'd string.  This time let's try to make the
      fix a bit more future-proof by eliminating the separate exit path.
      
      It's still not exactly critical because pg_dump will curl up and die on
      failure; but since the amount of the potential leak is now several KB,
      it seems worth back-patching as far as 9.2 where the previous fix landed.
      
      Per Coverity, which evidently is smarter than clang's static analyzer.
      8b63f894
  26. Feb 11, 2015
    • Michael Meskes's avatar
      Fixed array handling in ecpg. · 9be9ac42
      Michael Meskes authored
      When ecpg was rewritten for the new protocol version, not all variable types
      were corrected. This patch rewrites the code for these types to fix that. It
      also fixes the documentation to correctly state the status of array handling.
      9be9ac42
    • Tom Lane's avatar
      Fix pg_dump's heuristic for deciding which casts to dump. · 2593c703
      Tom Lane authored
      Back in 2003 we had a discussion about how to decide which casts to dump.
      At the time pg_dump really only considered an object's containing schema
      to decide what to dump (ie, dump whatever's not in pg_catalog), and so
      we chose a complicated idea involving whether the underlying types were to
      be dumped (cf commit a6790ce8).  But users
      are allowed to create casts between built-in types, and we failed to dump
      such casts.  Let's get rid of that heuristic, which has accreted even more
      ugliness since then, in favor of just looking at the cast's OID to decide
      if it's a built-in cast or not.
      
      In passing, also fix some really ancient code that supposed that it had to
      manufacture a dependency for the cast on its cast function; that's only
      true when dumping from a pre-7.3 server.  This just resulted in some wasted
      cycles and duplicate dependency-list entries with newer servers, but we
      might as well improve it.
      
      Per gripes from a number of people, most recently Greg Sabino Mullane.
      Back-patch to all supported branches.
      2593c703
    • Tom Lane's avatar
      Fix GEQO to not assume its join order heuristic always works. · 0d083103
      Tom Lane authored
      Back in commit 400e2c93 I rewrote GEQO's
      gimme_tree function to improve its heuristic for modifying the given tour
      into a legal join order.  In what can only be called a fit of hubris,
      I supposed that this new heuristic would *always* find a legal join order,
      and ripped out the old logic that allowed gimme_tree to sometimes fail.
      
      The folly of this is exposed by bug #12760, in which the "greedy" clumping
      behavior of merge_clump() can lead it into a dead end which could only be
      recovered from by un-clumping.  We have no code for that and wouldn't know
      exactly what to do with it if we did.  Rather than try to improve the
      heuristic rules still further, let's just recognize that it *is* a
      heuristic and probably must always have failure cases.  So, put back the
      code removed in the previous commit to allow for failure (but comment it
      a bit better this time).
      
      It's possible that this code was actually fully correct at the time and
      has only been broken by the introduction of LATERAL.  But having seen this
      example I no longer have much faith in that proposition, so back-patch to
      all supported branches.
      0d083103
  27. Feb 06, 2015
    • Heikki Linnakangas's avatar
      Report WAL flush, not insert, position in replication IDENTIFY_SYSTEM · 2af568c6
      Heikki Linnakangas authored
      When beginning streaming replication, the client usually issues the
      IDENTIFY_SYSTEM command, which used to return the current WAL insert
      position. That's not suitable for the intended purpose of that field,
      however. pg_receivexlog uses it to start replication from the reported
      point, but if it hasn't been flushed to disk yet, it will fail. Change
      IDENTIFY_SYSTEM to report the flush position instead.
      
      Backpatch to 9.1 and above. 9.0 doesn't report any WAL position.
      2af568c6
  28. Feb 04, 2015