  1. Jun 05, 2012
    • Fix some more bugs in contrib/xml2's xslt_process(). · d9b31e48
      Tom Lane authored
      It failed to check for error return from xsltApplyStylesheet(), as reported
      by Peter Gagarinov.  (So far as I can tell, libxslt provides no convenient
      way to get a useful error message in failure cases.  There might be some
      inconvenient way, but considering that this code is deprecated it's hard to
      get enthusiastic about putting lots of work into it.  So I just made it say
      "failed to apply stylesheet", in line with the existing error checks.)
      
      While looking at the code I also noticed that the string returned by
      xsltSaveResultToString was never freed, resulting in a session-lifespan
      memory leak.
      
      Back-patch to all supported versions.
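
      A minimal sketch of the affected call (the document and stylesheet
      literals here are hypothetical):

          -- If xsltApplyStylesheet() fails, xslt_process() now raises
          -- "failed to apply stylesheet" instead of ignoring the error;
          -- the result string is also freed, closing the leak.
          SELECT xslt_process(
              '<doc><v>42</v></doc>',
              '<xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                 <xsl:template match="/">
                   <out><xsl:value-of select="doc/v"/></out>
                 </xsl:template>
               </xsl:stylesheet>');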
  2. May 30, 2012
    • Fix incorrect password transformation in contrib/pgcrypto's DES crypt(). · 932ded2e
      Tom Lane authored
      Overly tight coding caused the password transformation loop to stop
      examining input once it had processed a byte equal to 0x80.  Thus, if
      the given password string contained such a byte (possible though not
      highly likely in UTF8, and perhaps in other non-ASCII encodings too),
      none of the subsequent characters contributed to the hash, making the
      password much weaker than it appears on the surface.
      
      This would only affect cases where applications used DES crypt() to encode
      passwords before storing them in the database.  If a weak password has been
      created in this fashion, the hash will stop matching after this update has
      been applied, so it will be easy to tell if any passwords were unexpectedly
      weak.  Changing to a different password would be a good idea in such a case.
      (Since DES has been considered inadequately secure for some time, changing
      to a different encryption algorithm can also be recommended.)
      
      This code, and the bug, are shared with at least PHP, FreeBSD, and OpenBSD.
      Since the other projects have already published their fixes, there is no
      point in trying to keep this commit private.
      
      This bug has been assigned CVE-2012-2143, and credit for its discovery goes
      to Rubin Xu and Joseph Bonneau.
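
      A minimal sketch of the affected usage pattern (table and column
      names are hypothetical):

          -- Store a DES-crypt hash:
          SELECT crypt('new password', gen_salt('des'));
          -- Verify a login attempt against a stored hash; hashes made
          -- from passwords containing a byte >= 0x80 stop matching once
          -- the fixed crypt() is in place:
          SELECT password_hash = crypt('entered password', password_hash)
            FROM app_users WHERE username = 'alice';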
  3. May 22, 2012
    • Fix error message for COMMENT/SECURITY LABEL ON COLUMN xxx IS 'yyy' · 8fbe5a31
      Robert Haas authored
      When the column name is an unqualified name, rather than table.column,
      the error message complains about too many dotted names, which is
      wrong.  Report by Peter Eisentraut based on examination of the
      sepgsql regression test output, but the problem also affects COMMENT.
      New wording as suggested by Tom Lane.
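
      A minimal sketch of both forms (table and column names are
      hypothetical):

          -- Qualified form, as intended:
          COMMENT ON COLUMN my_table.my_col IS 'a description';
          -- Unqualified form: the error now asks for a qualified column
          -- name instead of complaining about too many dotted names:
          COMMENT ON COLUMN my_col IS 'a description';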
  4. May 11, 2012
    • Fix contrib/citext's upgrade script to handle array and domain cases. · 63fecc91
      Tom Lane authored
      We previously recognized that citext wouldn't get marked as collatable
      during pg_upgrade from a pre-9.1 installation, and hacked its
      create-from-unpackaged script to manually perform the necessary catalog
      adjustments.  However, we overlooked the fact that domains over citext,
      as well as the citext[] array type, need the same adjustments.  Extend
      the script to handle those cases.
      
      Also, the documentation suggested that this was only an issue in pg_upgrade
      scenarios, which is quite wrong; loading any dump containing citext from a
      pre-9.1 server will also result in the type being wrongly marked.
      
      I addressed the documentation problem by rewriting the 9.1.2 release
      note paragraphs about this issue.  That is historically inaccurate,
      but it seems better than scattering the information across multiple
      places, and leaving incorrect information in the 9.1.2 notes would be
      bad anyway.  We'll still need to mention the issue again in the 9.1.4
      notes, but perhaps those can simply reference the 9.1.2 notes for fix
      instructions.
      
      Per report from Evan Carroll.  Back-patch into 9.1.
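
      The script in question is the one run by:

          -- Bring a pre-9.1 loose citext installation under extension
          -- control; the extended script now also fixes the collatable
          -- marking of citext[] and of domains over citext:
          CREATE EXTENSION citext FROM unpackaged;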
  5. Apr 28, 2012
    • Adjust timing units in pg_stat_statements. · 93f94e35
      Tom Lane authored
      Display total time and I/O timings in milliseconds, for consistency with
      the units used for timings in the core statistics views.  The columns
      remain of float8 type, so that sub-msec precision is available.  (At some
      point we will probably want to convert the core views to use float8 type
      for the same reason, but this patch does not touch that issue.)
      
      This is a release-note-requiring change in the meaning of the total_time
      column.  The I/O timing columns are new as of 9.2, so there is no
      compatibility impact from redefining them.
      
      Do some minor copy-editing in the documentation, too.
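
      For example, timings can now be read directly as milliseconds:

          SELECT query, calls, total_time, blk_read_time, blk_write_time
            FROM pg_stat_statements
           ORDER BY total_time DESC
           LIMIT 5;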
  6. Apr 19, 2012
    • Revise parameterized-path mechanism to fix assorted issues. · 5b7b5518
      Tom Lane authored
      This patch adjusts the treatment of parameterized paths so that all paths
      with the same parameterization (same set of required outer rels) for the
      same relation will have the same rowcount estimate.  We cache the rowcount
      estimates to ensure that property, and hopefully save a few cycles too.
      Doing this makes it practical for add_path_precheck to operate without
      a rowcount estimate: it need only assume that paths with different
      parameterizations never dominate each other, which is close enough to
      true anyway for coarse filtering, because normally a more-parameterized
      path should yield fewer rows thanks to having more join clauses to apply.
      
      In add_path, we do the full nine yards of comparing rowcount estimates
      along with everything else, so that we can discard parameterized paths that
      don't actually have an advantage.  This fixes some issues I'd found with
      add_path rejecting parameterized paths on the grounds that they were more
      expensive than not-parameterized ones, even though they yielded many fewer
      rows and hence would be cheaper once subsequent joining was considered.
      
      To make the same-rowcounts assumption valid, we have to require that any
      parameterized path enforce *all* join clauses that could be obtained from
      the particular set of outer rels, even if not all of them are useful for
      indexing.  This is required at both base scans and joins.  It's a good
      thing anyway since the net impact is that join quals are checked at the
      lowest practical level in the join tree.  Hence, discard the original
      rather ad-hoc mechanism for choosing parameterization joinquals, and build
      a better one that has a more principled rule for when clauses can be moved.
      The original rule was actually buggy anyway for lack of knowledge about
      which relations are part of an outer join's outer side; getting this right
      requires adding an outer_relids field to RestrictInfo.
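
      A sketch of the plan shape this concerns (the schema is
      hypothetical): with the inner scan parameterized by the outer
      relation, the join clause is enforced at the scan itself, the lowest
      practical level in the join tree:

          EXPLAIN
          SELECT *
            FROM orders o
            JOIN order_lines l ON l.order_id = o.id   -- parameterized
           WHERE o.placed_at > now() - interval '1 day';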
  7. Apr 14, 2012
    • Update contrib/README · 48ea5583
      Peter Eisentraut authored
      Remove lots of outdated information that is duplicated by the
      better-maintained SGML documentation.  In particular, remove the
      outdated listing of contrib modules.  Update the installation
      instructions to mention CREATE EXTENSION, but don't go into too much
      detail.
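
      For instance, the instructions now center on (module name chosen as
      an example):

          CREATE EXTENSION pg_stat_statements;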
  8. Apr 09, 2012
    • Save a few cycles while creating "sticky" entries in pg_stat_statements. · e969f9a7
      Tom Lane authored
      There's no need to sit there and increment the stats when we know all the
      increments would be zero anyway.  The actual additions might not be very
      expensive, but skipping acquisition of the spinlock seems like a good
      thing.  Pushing the logic about initialization of the usage count down into
      entry_alloc() allows us to do that while making the code actually simpler,
      not more complex.  Expansion on a suggestion by Peter Geoghegan.
  9. Apr 08, 2012
    • Improve management of "sticky" entries in contrib/pg_stat_statements. · d5375491
      Tom Lane authored
      This patch addresses a deficiency in the previous pg_stat_statements patch.
      We want to give sticky entries an initial "usage" factor high enough that
      they probably will stick around until their query is completed.  However,
      if the query never completes (eg it gets an error during execution), the
      entry shouldn't persist indefinitely.  Manage this by starting out with
      a usage setting equal to the (approximate) median usage value within the
      whole hashtable, but decaying the value much more aggressively than we
      do for normal entries.
      
      Peter Geoghegan
  10. Apr 06, 2012
    • Dept of second thoughts: improve the API for AnalyzeForeignTable. · cea49fe8
      Tom Lane authored
      If we make the initially-called function return the table physical-size
      estimate, acquire_inherited_sample_rows will be able to use that to
      allocate numbers of samples among child tables, when the day comes that
      we want to support foreign tables in inheritance trees.
    • Allow statistics to be collected for foreign tables. · 263d9de6
      Tom Lane authored
      ANALYZE now accepts foreign tables and allows the table's FDW to control
      how the sample rows are collected.  (But only manual ANALYZEs will touch
      foreign tables, for the moment, since among other things it's not very
      clear how to handle remote permissions checks in an auto-analyze.)
      
      contrib/file_fdw is extended to support this.
      
      Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
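
      A minimal sketch with file_fdw (server name and file path are
      hypothetical):

          CREATE EXTENSION file_fdw;
          CREATE SERVER file_srv FOREIGN DATA WRAPPER file_fdw;
          CREATE FOREIGN TABLE words (word text)
              SERVER file_srv
              OPTIONS (filename '/tmp/words.txt', format 'text');
          ANALYZE words;   -- sample collection is delegated to the FDW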
  11. Apr 05, 2012
    • Allow pg_archivecleanup to strip optional file extensions. · bbc02243
      Robert Haas authored
      Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
    • Improve efficiency of dblink by using libpq's new row processor API. · 6f922ef8
      Tom Lane authored
      This patch provides a test case for libpq's row processor API.
      contrib/dblink can deal with very large result sets by dumping them into
      a tuplestore (which can spill to disk) --- but until now, the intermediate
      storage of the query result in a PGresult meant memory bloat for any large
      result.  Now we use a row processor to convert the data to tuple form and
      dump it directly into the tuplestore.
      
      A limitation is that this only works for plain dblink() queries, not
      dblink_send_query() followed by dblink_get_result().  In the latter
      case we don't know the desired tuple rowtype soon enough.  While hack
      solutions to that are possible, a different user-level API would
      probably be a better answer.
      
      Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
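
      A sketch of the case that benefits (connection string hypothetical):

          -- The large result is fed row-by-row into the tuplestore rather
          -- than being materialized first in a PGresult:
          SELECT count(*)
            FROM dblink('dbname=postgres',
                        'SELECT generate_series(1, 1000000)')
              AS t(x int);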
  12. Apr 04, 2012
    • Fix a couple of contrib/dblink bugs. · d843ed21
      Tom Lane authored
      dblink_exec leaked temporary database connections if any error occurred
      after connection setup, for example
      	SELECT dblink_exec('...connect string...', 'select 1/0');
      Add a PG_TRY block to ensure PQfinish gets done when it is needed.
      (dblink_record_internal is on the hairy edge of needing similar treatment,
      but seems not to be actively broken at the moment.)
      
      Also, in 9.0 and up, only one of the three functions using tuplestore
      return mode was properly checking that the query context would allow
      a tuplestore result.
      
      Noted while reviewing dblink patch.  Back-patch to all supported branches.
  13. Mar 29, 2012
    • Fix dblink's failure to report correct connection name in error messages. · b75fbe91
      Tom Lane authored
      The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the
      surrounding function's conname variable, causing errors to be incorrectly
      reported as having occurred on the "unnamed" connection in some cases.
      This bug was actually visible in two cases in the regression tests,
      but apparently whoever added those cases wasn't paying attention.
      
      Noted by Kyotaro Horiguchi, though this is different from his proposed
      patch.
      
      Back-patch to 8.4; 8.3 does not have the same type of error reporting
      so the patch is not relevant.
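
      A minimal sketch of the reporting fix (connection name and string
      are hypothetical):

          SELECT dblink_connect('myconn', 'dbname=postgres');
          -- The division-by-zero error is now attributed to "myconn"
          -- rather than to the unnamed connection:
          SELECT dblink_exec('myconn', 'SELECT 1/0');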
    • Improve contrib/pg_stat_statements' handling of PREPARE/EXECUTE statements. · 566a1d43
      Tom Lane authored
      It's actually more useful for the module to ignore these.  Ignoring
      EXECUTE (and not incrementing the nesting level) allows the executor
      hooks to charge the time to the underlying prepared query, which
      shows up as a stats entry with the original PREPARE as query string
      (possibly modified by suppression of constants, which might not be
      terribly useful here but it's not worth avoiding).  This is much more
      useful than cluttering the stats table with a distinct entry for each
      textually distinct EXECUTE.
      
      Experimentation with this idea shows that it's also preferable to ignore
      PREPARE.  If we don't, we get two stats table entries, one with the query
      string hash and one with the jumble-derived hash, but with the same visible
      query string (modulo those constants).  This is confusing and not very
      helpful, since the first entry will only receive costs associated with
      initial planning of the query, which is not something counted at all
      normally by pg_stat_statements.  (And if we do start tracking planning
      costs, we'd want them blamed on the other hash table entry anyway.)
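
      A minimal sketch of the resulting behavior:

          PREPARE q(int) AS SELECT $1 + 0;
          EXECUTE q(1);
          EXECUTE q(2);
          -- Both executions accumulate into one pg_stat_statements entry
          -- whose query text is the PREPARE, not one entry per distinct
          -- EXECUTE text.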