  1. Mar 28, 2016
  2. Feb 29, 2016
  3. Feb 21, 2016
  4. Feb 18, 2016
    • Tom Lane
      Fix multiple bugs in contrib/pgstattuple's pgstatindex() function. · 29f29972
      Tom Lane authored
      Dead or half-dead index leaf pages were incorrectly reported as live, as a
      consequence of a code rearrangement I made (during a moment of severe brain
      fade, evidently) in commit d287818e.
      
      The index metapage was not counted in index_size, causing that result to
      not agree with the actual index size on-disk.
      
      Index root pages were not counted in internal_pages, which is inconsistent
      compared to the case of a root that's also a leaf (one-page index), where
      the root would be counted in leaf_pages.  Aside from that inconsistency,
      this could lead to additional transient discrepancies between the reported
      page counts and index_size, since it's possible for pgstatindex's scan to
      see zero or multiple pages marked as BTP_ROOT, if the root moves due to
      a split during the scan.  With these fixes, index_size will always be
      exactly one page more than the sum of the displayed page counts.
      
      Also, the index_size result was incorrectly documented as being measured in
      pages; it's always been measured in bytes.  (While fixing that, I couldn't
      resist doing some small additional wordsmithing on the pgstattuple docs.)
      
      Including the metapage causes the reported index_size to not be zero for
      an empty index.  To preserve the desired property that the pgstattuple
      regression test results are platform-independent (ie, BLCKSZ configuration
      independent), scale the index_size result in the regression tests.
      
      The documentation issue was reported by Otsuka Kenji, and the inconsistent
      root page counting by Peter Geoghegan; the other problems noted by me.
      Back-patch to all supported branches, because this has been broken for
      a long time.
      29f29972
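      A minimal SQL sketch of the invariant described above, assuming the
      pgstattuple extension is installed (the index name is hypothetical):

          CREATE EXTENSION IF NOT EXISTS pgstattuple;
          SELECT index_size,
                 (1 + internal_pages + leaf_pages + empty_pages + deleted_pages)
                   * current_setting('block_size')::int AS pages_in_bytes
          FROM pgstatindex('some_index');
          -- after this fix, index_size and pages_in_bytes should be equal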
  5. Feb 16, 2016
    • Tom Lane
      Improve documentation about CREATE INDEX CONCURRENTLY. · 528db6ab
      Tom Lane authored
      Clarify the description of which transactions will block a CREATE INDEX
      CONCURRENTLY command from proceeding, and mention that the index might
      still not be usable after CREATE INDEX completes.  (This happens if the
      index build detected broken HOT chains, so that pg_index.indcheckxmin gets
      set, and there are open old transactions preventing the xmin horizon from
      advancing past the index's initial creation.  I didn't want to explain what
      broken HOT chains are, though, so I omitted an explanation of exactly when
      old transactions prevent the index from being used.)
      
      Per discussion with Chris Travers.  Back-patch to all supported branches,
      since the same text appears in all of them.
      528db6ab
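      A hedged sketch of how to check the second point (table and index
      names are hypothetical): pg_index.indcheckxmin shows whether a newly
      built index is still waiting on old transactions.

          CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);
          SELECT indexrelid::regclass, indisvalid, indcheckxmin
          FROM pg_index
          WHERE indexrelid = 'orders_customer_idx'::regclass;
          -- indcheckxmin = true: old open transactions may still keep the
          -- planner from using the index for a while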
    • Tatsuo Ishii
      Improve wording in the planner doc · 779c46a3
      Tatsuo Ishii authored
      Change "In this case" to "In the example above" to clarify what it
      actually refers to.
      779c46a3
    • Fujii Masao
      Correct the formulas for System V IPC parameters SEMMNI and SEMMNS in docs. · 8a96651c
      Fujii Masao authored
      In runtime.sgml, the old formulas for calculating the reasonable
      values of SEMMNI and SEMMNS were incorrect. They failed to count
      the semaphores needed by the checkpointer process (introduced in
      9.2) and by the background worker processes (introduced in 9.3).
      
      This commit fixes those formulas so that they account for the
      semaphores needed by those processes.
      
      Report and patch by Kyotaro Horiguchi. Only the patch for 9.3 was
      modified by me. Back-patch to 9.2 where the checkpointer process was
      added and the number of needed semaphores was increased.
      
      Author: Kyotaro Horiguchi
      Reviewed-by: Fujii Masao
      Backpatch: 9.2
      Discussion: http://www.postgresql.org/message-id/20160203.125119.66820697.horiguchi.kyotaro@lab.ntt.co.jp
      8a96651c
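      A rough worked example of the corrected arithmetic, assuming the
      post-fix formulas as they appear in recent releases (the exact wording
      lives in runtime.sgml; parameter values below are hypothetical): with
      max_connections = 100, autovacuum_max_workers = 3 and
      max_worker_processes = 8,

          ceil((100 + 3 + 8 + 5) / 16) = ceil(116 / 16) = 8   semaphore sets
          SEMMNI >= 8
          SEMMNS >= 8 * 17 = 136   (plus room for other applications)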
  6. Feb 11, 2016
    • Noah Misch
      Accept pg_ctl timeout from the PGCTLTIMEOUT environment variable. · 4421b525
      Noah Misch authored
      Many automated test suites call pg_ctl.  Buildfarm members axolotl,
      hornet, mandrill, shearwater, sungazer and tern have failed when server
      shutdown took longer than the pg_ctl default 60s timeout.  This addition
      permits slow hosts to easily raise the timeout without us editing a
      --timeout argument into every test suite pg_ctl call.  Back-patch to 9.1
      (all supported versions) for the sake of automated testing.
      
      Reviewed by Tom Lane.
      4421b525
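      A hedged usage sketch (the value is illustrative): the variable simply
      supplies the timeout that would otherwise need a --timeout option on
      every pg_ctl call.

          export PGCTLTIMEOUT=180        # seconds; pick whatever the slow host needs
          pg_ctl -D "$PGDATA" -w stop    # now waits up to 180s instead of 60s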
  7. Feb 08, 2016
  8. Feb 07, 2016
    • Tom Lane
      Improve documentation about PRIMARY KEY constraints. · ddcc256c
      Tom Lane authored
      Get rid of the false implication that PRIMARY KEY is exactly equivalent to
      UNIQUE + NOT NULL.  That was more-or-less true at one time in our
      implementation, but the standard doesn't say that, and we've grown various
      features (many of them required by spec) that treat a pkey differently from
      less-formal constraints.  Per recent discussion on pgsql-general.
      
      I failed to resist the temptation to do some other wordsmithing in the
      same area.
      ddcc256c
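      One concrete difference, as a hedged example (tables hypothetical): a
      foreign key written without a column list targets the referenced
      table's primary key, which UNIQUE + NOT NULL alone does not provide.

          CREATE TABLE customers (id int PRIMARY KEY, email text UNIQUE NOT NULL);
          CREATE TABLE orders (customer_id int REFERENCES customers);
          -- the omitted column list defaults to customers' primary key (id);
          -- with only a UNIQUE NOT NULL column the target would have to be
          -- spelled out, e.g. REFERENCES customers (email)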
    • Tom Lane
      3d6c9888
  9. Jan 31, 2016
  10. Jan 02, 2016
  11. Dec 28, 2015
    • Tom Lane
      Document the exponentiation operator as associating left to right. · 7adbde26
      Tom Lane authored
      Common mathematical convention is that exponentiation associates right to
      left.  We aren't going to change the parser for this, but we could note
      it in the operator's description.  (It's already noted in the operator
      precedence/associativity table, but users might not look there.)
      Per bug #13829 from Henrik Pauli.
      7adbde26
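      A quick illustration of the left-to-right association being documented:

          SELECT 2 ^ 3 ^ 3;      -- parsed as (2 ^ 3) ^ 3, yielding 512
          SELECT 2 ^ (3 ^ 3);    -- the usual mathematical reading, 134217728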
  12. Dec 14, 2015
    • Tom Lane
      Docs: document that psql's "\i -" means read from stdin. · 6436445e
      Tom Lane authored
      This has worked that way for a long time, maybe always, but you would
      not have known it from the documentation.  Also back-patch the notes
      I added to HEAD earlier today about behavior of the "-f -" switch,
      which likewise have been valid for many releases.
      6436445e
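      A hedged example of the now-documented behavior (the file name is
      hypothetical):

          -- within a psql script or interactive session:
          \i -            -- reads and executes commands from stdin until EOF
          -- the command-line counterpart documented alongside it:
          --   psql -f - < setup.sql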
  13. Dec 13, 2015
  14. Dec 04, 2015
    • Tom Lane
      Further improve documentation of the role-dropping process. · 255cc9b2
      Tom Lane authored
      In commit 1ea0c73c I added a section to user-manag.sgml about how to drop
      roles that own objects; but as pointed out by Stephen Frost, I neglected
      that shared objects (databases or tablespaces) may need special treatment.
      Fix that.  Back-patch to supported versions, like the previous patch.
      255cc9b2
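      A hedged sketch of that special treatment (names hypothetical): shared
      objects are reassigned or dropped explicitly before the usual
      per-database cleanup.

          ALTER DATABASE appdb OWNER TO postgres;      -- or DROP DATABASE appdb;
          ALTER TABLESPACE fastspace OWNER TO postgres;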
  15. Nov 22, 2015
    • Tom Lane
      Adopt the GNU convention for handling tar-archive members exceeding 8GB. · b054ca03
      Tom Lane authored
      The POSIX standard for tar headers requires archive member sizes to be
      printed in octal with at most 11 digits, limiting the representable file
      size to 8GB.  However, GNU tar and apparently most other modern tars
      support a convention in which oversized values can be stored in base-256,
      allowing any practical file to be a tar member.  Adopt this convention
      to remove two limitations:
      * pg_dump with -Ft output format failed if the contents of any one table
      exceeded 8GB.
      * pg_basebackup failed if the data directory contained any file exceeding
      8GB.  (This would be a fatal problem for installations configured with a
      table segment size of 8GB or more, and it has also been seen to fail when
      large core dump files exist in the data directory.)
      
      File sizes under 8GB are still printed in octal, so that no compatibility
      issues are created except in cases that would have failed entirely before.
      
      In addition, this patch fixes several bugs in the same area:
      
      * In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
      size_t, which meant that on 32-bit machines it would write a corrupt tar
      header for file sizes between 4GB and 8GB, even though no error was raised.
      This broke both "pg_dump -Ft" and pg_basebackup for such cases.
      
      * pg_restore from a tar archive would fail on tables of size between 4GB
      and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits.
      This happened even with an archive file not affected by the previous bug.
      
      * pg_basebackup would fail if there were files of size between 4GB and 8GB,
      even on 64-bit machines.
      
      * In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size,
      on 64-bit big-endian machines.
      
      In view of these potential data-loss bugs, back-patch to all supported
      branches, even though removal of the documented 8GB limit might otherwise
      be considered a new feature rather than a bug fix.
      b054ca03
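      For context, the arithmetic behind the 8GB ceiling: an 11-digit octal
      size field tops out at 8^11 - 1 = 8,589,934,591 bytes.  In the base-256
      convention, roughly speaking, the field's high-order bit is set as a
      flag and the rest of the field holds the size as a big-endian binary
      number, which is ample for any practical file.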
  16. Nov 10, 2015
    • Tom Lane
      Improve our workaround for 'TeX capacity exceeded' in building PDF files. · e12a99c8
      Tom Lane authored
      In commit a5ec86a7 I wrote a quick hack
      that reduced the number of TeX string pool entries created while converting
      our documentation to PDF form.  That held the fort for a while, but as of
      HEAD we're back up against the same limitation.  It turns out that the
      original coding of \FlowObjectSetup actually results in *three* string pool
      entries being generated for every "flow object" (that is, potential
      cross-reference target) in the documentation, and my previous hack only got
      rid of one of them.  With a little more care, we can reduce the string
      count to one per flow object plus one per actually-cross-referenced flow
      object (about 115000 + 5000 as of current HEAD); that should work until
      the documentation volume roughly doubles from where it is today.
      
      As a not-incidental side benefit, this change also causes pdfjadetex to
      stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file.
      It had been making one willy-nilly for every flow object; now it's just one
      per actually-cross-referenced object.  This results in close to a 2X
      savings in PDF file size.  We will still want to run the output through
      "jpdftweak" to get it to be compressed; but we no longer need removal of
      unreferenced bookmarks, so we might be able to find a quicker tool for
      that step.
      
      Although the failure only affects HEAD and US-format output at the moment,
      9.5 cannot be more than a few pages short of failing likewise, so it
      will inevitably fail after a few rounds of minor-version release notes.
      I don't have a lot of faith that we'll never hit the limit in the older
      branches; and anyway it would be nice to get rid of jpdftweak across the
      board.  Therefore, back-patch to all supported branches.
      e12a99c8
  17. Oct 07, 2015
    • Tom Lane
      Improve documentation of the role-dropping process. · 53951058
      Tom Lane authored
      In general one may have to run both REASSIGN OWNED and DROP OWNED to get
      rid of all the dependencies of a role to be dropped.  This was alluded to
      in the REASSIGN OWNED man page, but not really spelled out in full; and in
      any case the procedure ought to be documented in a more prominent place
      than that.  Add a section to the "Database Roles" chapter explaining this,
      and do a bit of wordsmithing in the relevant commands' man pages.
      53951058
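      The overall procedure, as a hedged sketch with a hypothetical role
      name (the first two commands are repeated in every database holding
      objects owned by the role):

          REASSIGN OWNED BY doomed_role TO postgres;
          DROP OWNED BY doomed_role;
          -- repeat the two commands above in each relevant database, then:
          DROP ROLE doomed_role;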
  18. Oct 05, 2015
  19. Oct 02, 2015
    • Tom Lane
      Docs: add disclaimer about hazards of using regexps from untrusted sources. · 52511fd6
      Tom Lane authored
      It's not terribly hard to devise regular expressions that take large
      amounts of time and/or memory to process.  Recent testing by Greg Stark has
      also shown that machines with small stack limits can be driven to stack
      overflow by suitably crafted regexps.  While we intend to fix these things
      as much as possible, it's probably impossible to eliminate slow-execution
      cases altogether.  In any case we don't want to treat such things as
      security issues.  The history of that code should already discourage
      prudent DBAs from allowing execution of regexp patterns coming from
      possibly-hostile sources, but it seems like a good idea to warn about the
      hazard explicitly.
      
      Currently, similar_escape() allows access to enough of the underlying
      regexp behavior that the warning has to apply to SIMILAR TO as well.
      We might be able to make it safer if we tightened things up to allow only
      SQL-mandated capabilities in SIMILAR TO; but that would be a subtly
      non-backwards-compatible change, so it requires discussion and probably
      could not be back-patched.
      
      Per discussion among pgsql-security list.
      52511fd6
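      A hedged illustration of the exposure (identifiers hypothetical): the
      hazard arises whenever an untrusted string becomes the pattern itself.

          SELECT * FROM messages WHERE body ~ $1;                 -- $1 from an untrusted source: risky
          SELECT * FROM messages WHERE position($1 IN body) > 0;  -- plain substring test avoids it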
  20. Sep 22, 2015
  21. Sep 16, 2015
    • Tom Lane
      Fix documentation of regular expression character-entry escapes. · 11103c6d
      Tom Lane authored
      The docs claimed that \uhhhh would be interpreted as a Unicode value
      regardless of the database encoding, but it's never been implemented
      that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely
      the character that pg_mb2wchar translates to 0xhhhh.  Moreover we were
      falsely dismissive of the usefulness of Unicode code points above FFFF.
      Fix that.
      
      It's been like this for ages, so back-patch to all supported branches.
      11103c6d
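      A hedged illustration, assuming a database (and client) in UTF8
      encoding, where the two spellings resolve to the same character:

          SELECT 'café' ~ '\u00e9' AS u_escape,
                 'café' ~ '\x00e9' AS x_escape;
          -- both are true: each escape denotes the character that
          -- pg_mb2wchar maps to 0x00e9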
  22. Sep 07, 2015
  23. Aug 27, 2015
  24. Aug 26, 2015
    • Tom Lane
      Docs: be explicit about datatype matching for lead/lag functions. · 8cffc4f5
      Tom Lane authored
      The default argument, if given, has to be of exactly the same datatype
      as the first argument; but this was not stated in so many words, and
      the error message you get about it might not lead your thought in the
      right direction.  Per bug #13587 from Robert McGehee.
      
      A quick scan says that these are the only two built-in functions with two
      anyelement arguments and no other polymorphic arguments.  There are plenty
      of cases of, eg, anyarray and anyelement, but those seem less likely to
      confuse.  For instance this doesn't seem terribly hard to figure out:
      "function array_remove(integer[], numeric) does not exist".  So I've
      contented myself with fixing these two cases.
      8cffc4f5
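      A hedged example of the trap (table and columns hypothetical): with a
      numeric column, an integer default makes the match fail with an error
      along the lines of "function lag(numeric, integer, integer) does not
      exist".

          SELECT lag(price, 1, 0) OVER (ORDER BY ts) FROM quotes;           -- fails if price is numeric
          SELECT lag(price, 1, 0::numeric) OVER (ORDER BY ts) FROM quotes;  -- default matches, works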
  25. Aug 15, 2015
    • Tom Lane
      Improve documentation about MVCC-unsafe utility commands. · ed511659
      Tom Lane authored
      The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same
      way as TRUNCATE, because they replace all rows of the table with newly-made
      rows with a new xmin.  (Ideally, concurrent transactions with old snapshots
      would continue to see the old table contents, but the data is not there
      anymore --- and if it were there, it would be inconsistent with the table's
      updated rowtype, so there would be serious implementation problems to fix.)
      This was nowhere documented though, and the problem was only documented for
      TRUNCATE in a note in the TRUNCATE reference page.  Create a new "Caveats"
      section in the MVCC chapter that can be home to this and other limitations
      on serializable consistency.
      
      In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim
      space occupied by a dropped column.  They don't reconstruct existing tuples
      so they couldn't do that.
      
      Back-patch to all supported branches.
      ed511659
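      A hedged two-session sketch of the effect on an old snapshot (table
      name hypothetical):

          -- session 1
          BEGIN ISOLATION LEVEL REPEATABLE READ;
          SELECT 1;                          -- establishes the snapshot
          -- session 2: a table-rewriting ALTER TABLE runs and commits
          ALTER TABLE accounts ALTER COLUMN balance TYPE numeric(12,2);
          -- session 1, still on its old snapshot
          SELECT count(*) FROM accounts;     -- the table now appears empty
          COMMIT;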
  26. Aug 05, 2015
  27. Jul 29, 2015
    • Tom Lane
      Update our documentation concerning where to create data directories. · 263f2259
      Tom Lane authored
      Although initdb has long discouraged use of a filesystem mount-point
      directory as a PG data directory, this point was covered nowhere in the
      user-facing documentation.  Also, with the popularity of pg_upgrade,
      we really need to recommend that the PG user own not only the data
      directory but its parent directory too.  (Without a writable parent
      directory, operations such as "mv data data.old" fail immediately.
      pg_upgrade itself doesn't do that, but wrapper scripts for it often do.)
      
      Hence, adjust the "Creating a Database Cluster" section to address
      these points.  I also took the liberty of wordsmithing the discussion
      of NFS a bit.
      
      These considerations aren't by any means new, so back-patch to all
      supported branches.
      263f2259
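      A hedged sketch of the recommended layout (paths are illustrative):
      create a directory under the mount point, owned by the PostgreSQL
      user, and put the data directory inside it, so that operations like
      "mv data data.old" stay possible.

          mkdir /srv/pgsql                   # a directory below the mount point, not the mount point itself
          chown postgres /srv/pgsql          # the PG user owns the parent ...
          su postgres -c 'initdb -D /srv/pgsql/data'   # ... and the data directory created within it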
  28. Jul 28, 2015
    • Andres Freund
      Disable ssl renegotiation by default. · 2f91e7bb
      Andres Freund authored
      While postgres' use of SSL renegotiation is a good idea in theory, it
      turned out not to work well in practice. The specification and openssl's
      implementation of it have led to several security issues. Postgres' use
      of renegotiation also had its share of bugs.
      
      Additionally OpenSSL has a bunch of bugs around renegotiation, reported
      and open for years, that regularly lead to connections breaking with
      obscure error messages. We tried increasingly complex workarounds to get
      around these bugs, but we didn't find anything complete.
      
      Since these connection breakages often lead to hard-to-debug problems,
      e.g. spuriously failing base backups and significant latency spikes when
      synchronous replication is used, we have decided to change the default
      setting for ssl renegotiation to 0 (disabled) in the released back
      branches and remove it entirely in 9.5 and master.
      
      Author: Michael Paquier, with changes by me
      Discussion: 20150624144148.GQ4797@alap3.anarazel.de
      Backpatch: 9.0-9.4; 9.5 and master get a different patch
      2f91e7bb
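      For the back branches, the corresponding line in postgresql.conf (now
      the default) would be:

          ssl_renegotiation_limit = 0        # disable SSL renegotiation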
  29. Jul 10, 2015
    • Tom Lane
      Improve documentation about array concat operator vs. underlying functions. · 349ce287
      Tom Lane authored
      The documentation implied that there was seldom any reason to use the
      array_append, array_prepend, and array_cat functions directly.  But that's
      not really true, because they can help make it clear which case is meant,
      which the || operator can't do since it's overloaded to represent all three
      cases.  Add some discussion and examples illustrating the potentially
      confusing behavior that can ensue if the parser misinterprets what was
      meant.
      
      Per a complaint from Michael Herold.  Back-patch to 9.2, which is where ||
      started to behave this way.
      349ce287
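      A hedged illustration of the ambiguity, along the lines of the example
      the docs now give: an untyped NULL fed to the operator is resolved as
      an array, while the function states the intent.

          SELECT ARRAY[1, 2] || NULL;              -- NULL taken as an array; result {1,2}
          SELECT array_append(ARRAY[1, 2], NULL);  -- clearly an element; result {1,2,NULL}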
  30. Jul 09, 2015
    • Heikki Linnakangas
      Fix another broken link in documentation. · 8dc8a31a
      Heikki Linnakangas authored
      Tom fixed another one of these in commit 7f32dbcd, but there was another
      almost identical one in libpq docs. Per his comment:
      
      HP's web server has apparently become case-sensitive sometime recently.
      Per bug #13479 from Daniel Abraham.  Corrected link identified by Alvaro.
      8dc8a31a