  1. Jul 03, 2017
  2. Jun 30, 2017
  3. Jun 22, 2017
  4. Jun 21, 2017
    • Phase 3 of pgindent updates. · 382ceffd
      Tom Lane authored
      Don't move parenthesized lines to the left, even if that means they
      flow past the right margin.
      
      By default, BSD indent lines up statement continuation lines that are
      within parentheses so that they start just to the right of the preceding
      left parenthesis.  However, traditionally, if that resulted in the
      continuation line extending to the right of the desired right margin,
      then indent would push it left just far enough to not overrun the margin,
      if it could do so without making the continuation line start to the left of
      the current statement indent.  That makes for a weird mix of indentations
      unless one has been completely rigid about never violating the 80-column
      limit.
      
      This behavior has been pretty universally panned by Postgres developers.
      Hence, disable it with indent's new -lpl switch, so that parenthesized
      lines are always lined up with the preceding left paren.
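
      As a hedged illustration (identifiers are invented), the difference
      looks like this:

          /* old: pushed left so as not to overrun the 80-column margin */
          result = function_with_a_long_name(first_argument, second_argument,
                  third_argument, fourth_argument);

          /* new (-lpl): always aligned with the preceding left paren */
          result = function_with_a_long_name(first_argument, second_argument,
                                             third_argument, fourth_argument);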
      
      This patch is much less interesting than the first round of indent
      changes, but also bulkier, so I thought it best to separate the effects.
      
      Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
      Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
    • Phase 2 of pgindent updates. · c7b8998e
      Tom Lane authored
      Change pg_bsd_indent to follow upstream rules for placement of comments
      to the right of code, and remove pgindent hack that caused comments
      following #endif to not obey the general rule.
      
      Commit e3860ffa wasn't actually using
      the published version of pg_bsd_indent, but a hacked-up version that
      tried to minimize the amount of movement of comments to the right of
      code.  The situation of interest is where such a comment has to be
      moved to the right of its default placement at column 33 because there's
      code there.  BSD indent has always moved right in units of tab stops
      in such cases --- but in the previous incarnation, indent was working
      in 8-space tab stops, while now it knows we use 4-space tabs.  So the
      net result is that in about half the cases, such comments are placed
      one tab stop left of before.  This is better all around: it leaves
      more room on the line for comment text, and it means that in such
      cases the comment uniformly starts at the next 4-space tab stop after
      the code, rather than sometimes one and sometimes two tabs after.
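
      A hedged example (hypothetical code) of the resulting placement:

          value = compute_something_fairly_long(key);     /* comment begins at
                                                           * the next 4-space
                                                           * tab stop after the
                                                           * code */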
      
      Also, ensure that comments following #endif are indented the same
      as comments following other preprocessor commands such as #else.
      That inconsistency turns out to have been self-inflicted damage
      from a poorly-thought-through post-indent "fixup" in pgindent.
      
      This patch is much less interesting than the first round of indent
      changes, but also bulkier, so I thought it best to separate the effects.
      
      Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
      Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
    • Initial pgindent run with pg_bsd_indent version 2.0. · e3860ffa
      Tom Lane authored
      The new indent version includes numerous fixes thanks to Piotr Stefaniak.
      The main changes visible in this commit are:
      
      * Nicer formatting of function-pointer declarations.
      * No longer unexpectedly removes spaces in expressions using casts,
        sizeof, or offsetof.
      * No longer wants to add a space in "struct structname *varname", as
        well as some similar cases for const- or volatile-qualified pointers.
      * Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
      * Fixes bug where comments following declarations were sometimes placed
        with no space separating them from the code.
      * Fixes some odd decisions for comments following case labels.
      * Fixes some cases where comments following code were indented to less
        than the expected column 33.
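
      As a hedged sketch of the second and third items above (identifiers
      are invented for illustration):

          ptr = (MyStruct *) palloc(sizeof(MyStruct));    /* cast/sizeof
                                                           * spacing left
                                                           * alone now */
          struct structname *varname;     /* no space forced after the '*' */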
      
      On the less good side, it now tends to put more whitespace around typedef
      names that are not listed in typedefs.list.  This might encourage us to
      put more effort into typedef name collection; it's not really a bug in
      indent itself.
      
      There are more changes coming after this round, having to do with comment
      indentation and alignment of lines appearing within parentheses.  I wanted
      to limit the size of the diffs to something that could be reviewed without
      one's eyes completely glazing over, so it seemed better to split up the
      changes as much as practical.
      
      Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
      Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
  5. Jun 15, 2017
    • Fix low-probability leaks of PGresult objects in the backend. · a3bed62d
      Tom Lane authored
      We had three occurrences of essentially the same coding pattern
      wherein we tried to retrieve a query result from a libpq connection
      without blocking.  In the case where PQconsumeInput failed (typically
      indicating a lost connection), all three loops simply gave up and
      returned, forgetting to clear any previously-collected PGresult
      object.  Since those are malloc'd not palloc'd, the oversight results
      in a process-lifespan memory leak.
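
      A hedged sketch of the pattern at issue (simplified; not the exact
      backend code):

          PGresult   *last_res = NULL;

          for (;;)
          {
              PGresult   *res;

              while (PQisBusy(conn))
              {
                  /* wait for the socket, then ingest whatever arrived */
                  if (!PQconsumeInput(conn))
                      return NULL;    /* lost connection; last_res is never
                                       * PQclear()ed here: the leak */
              }
              res = PQgetResult(conn);
              if (res == NULL)
                  break;              /* query complete */
              PQclear(last_res);
              last_res = res;
          }
          return last_res;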
      
      One instance, in libpqwalreceiver, is of little significance because
      the walreceiver process would just quit anyway if its connection fails.
      But we might as well fix it.
      
      The other two instances, in postgres_fdw, are somewhat more worrisome
      because at least in principle the scenario could be repeated, allowing
      the amount of memory leaked to build up to something worth worrying
      about.  Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
      calls, as well as other calls that could potentially elog(ERROR),
      providing another way to exit without having cleared the PGresult.
      Here we need to add PG_TRY logic similar to what exists in quite a
      few other places in postgres_fdw.
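
      A hedged sketch of the PG_TRY shape being added (simplified):

          PGresult   *volatile last_res = NULL;

          PG_TRY();
          {
              /* the retrieval loop, which may elog(ERROR) or be exited
               * via CHECK_FOR_INTERRUPTS() */
          }
          PG_CATCH();
          {
              PQclear(last_res);  /* don't leak the PGresult on error exit */
              PG_RE_THROW();
          }
          PG_END_TRY();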
      
      Coverity noted the libpqwalreceiver bug; I found the other two cases
      by checking all calls of PQconsumeInput.
      
      Back-patch to all supported versions as appropriate (9.2 lacks
      postgres_fdw, so this is really quite unexciting for that branch).
      
      Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us
  6. Jun 13, 2017
  7. Jun 08, 2017
  8. Jun 07, 2017
    • postgres_fdw: Allow cancellation of transaction control commands. · ae9bfc5d
      Robert Haas authored
      Commit f039eaac, later back-patched
      with commit 1b812afb, allowed many of
      the queries issued by postgres_fdw to fetch remote data to respond to
      cancel interrupts in a timely fashion.  However, it didn't do anything
      about the transaction control commands, which remained
      noninterruptible.
      
      Improve the situation by changing do_sql_command() to retrieve query
      results using pgfdw_get_result(), which uses the asynchronous
      interface to libpq so that it can check for interrupts every time
      libpq returns control.  Since this might result in a situation
      where we can no longer be sure that the remote transaction state
      matches the local transaction state, add a facility to force all
      levels of the local transaction to abort if we've lost track of
      the remote state; without this, an apparently-successful commit of
      the local transaction might fail to commit changes made on the
      remote side.  Also, add a 60-second timeout for queries issued during
      transaction abort; if that expires, give up and mark the state of
      the connection as unknown.  Drop all such connections when we exit
      the local transaction.  Together, these changes mean that if we're
      aborting the local toplevel transaction anyway, we can just drop the
      remote connection in lieu of waiting (possibly for a very long time)
      for it to complete an abort.
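
      A hedged sketch of the interruptible retrieval idea described above
      (simplified; the real loop is pgfdw_get_result in postgres_fdw):

          if (!PQsendQuery(conn, sql))
              pgfdw_report_error(ERROR, NULL, conn, false, sql);

          while (PQisBusy(conn))
          {
              int         wc;

              CHECK_FOR_INTERRUPTS();     /* honor cancel requests promptly */
              wc = WaitLatchOrSocket(MyLatch,
                                     WL_LATCH_SET | WL_SOCKET_READABLE,
                                     PQsocket(conn), -1L, PG_WAIT_EXTENSION);
              ResetLatch(MyLatch);
              if ((wc & WL_SOCKET_READABLE) && !PQconsumeInput(conn))
                  pgfdw_report_error(ERROR, NULL, conn, false, sql);
          }
          res = PQgetResult(conn);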
      
      This still leaves quite a bit of room for improvement.  PQcancel()
      has no asynchronous interface, so if we get stuck sending the cancel
      request we'll still hang.  Also, PQsetnonblocking() is not used, which
      means we could block uninterruptibly when sending a query.  There
      might be some other optimizations possible as well.  Nonetheless,
      this allows us to escape a wait for an unresponsive remote server
      quickly in many more cases than previously.
      
      Report by Suraj Kharage.  Patch by me and Rafia Sabih.  Review
      and testing by Amit Kapila and Tushar Ahuja.
      
      Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
  9. Jun 04, 2017
    • Replace over-optimistic Assert in partitioning code with a runtime test. · e7941a97
      Tom Lane authored
      get_partition_parent felt that it could simply Assert that systable_getnext
      found a tuple.  This is unlike any other caller of that function, and it's
      unsafe IMO --- in fact, the reason I noticed it was that the Assert failed.
      (OK, I was working with known-inconsistent catalog contents, but I wasn't
      expecting the DB to fall over quite that violently.  The behavior in a
      non-assert-enabled build wouldn't be very nice, either.)  Fix it to do what
      other callers do, namely an actual runtime-test-and-elog.
      
      Also, standardize the wording of elog messages that are complaining about
      unexpected failure of systable_getnext.  90% of them say "could not find
      tuple for <object>", so make the remainder do likewise.  Many of the
      holdouts were using the phrasing "cache lookup failed", which is outright
      misleading since no catcache search is involved.
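
      A hedged sketch of the shape of the fix (simplified):

          tuple = systable_getnext(scan);

          /* previously: Assert(HeapTupleIsValid(tuple)); */
          if (!HeapTupleIsValid(tuple))
              elog(ERROR, "could not find tuple for parent of relation %u",
                   RelationGetRelid(rel));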
  10. May 30, 2017
  11. May 21, 2017
  12. May 18, 2017
  13. May 17, 2017
    • Post-PG 10 beta1 pgindent run · a6fd7b7a
      Bruce Momjian authored
      perltidy run not included.
    • Preventive maintenance in advance of pgindent run. · c079673d
      Tom Lane authored
      Reformat various places in which pgindent will make a mess, and
      fix a few small violations of coding style that I happened to notice
      while perusing the diffs from a pgindent dry run.
      
      There is one actual bug fix here: the need-to-enlarge-the-buffer code
      path in icu_convert_case was obviously broken.  Perhaps it's unreachable
      in our usage?  Or maybe this is just sadly undertested.
  14. May 14, 2017
    • Suppress indentation from Data::Dumper in regression tests · 12ad38b3
      Andrew Dunstan authored
      Ultra-modern versions of the Perl Data::Dumper module have apparently
      changed how they indent output. Instead of trying to keep up, we
      choose to tell it to suppress all indentation in the hstore_plperl
      regression tests.
      
      Backpatch to 9.5 where this feature was introduced.
  15. May 13, 2017
    • Fix race condition leading to hanging logical slot creation. · 955a684e
      Andres Freund authored
      The snapshot assembly during the creation of logical slots relied on
      waiting for transactions in xl_running_xacts to end, by checking for
      their commit/abort records.  Unfortunately, despite locking, it is
      possible to see an xl_running_xacts record listing transactions as
      running that have already WAL-logged a commit/abort record, because
      the locking only prevents the ProcArray from being adjusted, and the
      commit record has to be logged first.
      
      That led to either delayed or hanging snapshot creation, because
      snapbuild.c would wait "forever" to see commit/abort records for some
      transactions.  That hang resolved only if an xl_running_xacts record
      without any running transactions happened to be logged, which is far
      from certain on a busy server.
      
      It's impractical to prevent that via more heavyweight locking; the
      likelihood of deadlocks and significantly increased contention would
      be too great.
      
      Instead change the initial snapshot creation to be solely based on
      tracking the oldest running transaction via
      xl_running_xacts->oldestRunningXid - that actually ends up
      significantly simplifying the code.  That has two disadvantages:
      1) Because we cannot fully "trust" the contents of xl_running_xacts,
         we cannot use it to build the initial snapshot.  Instead we have to
         wait twice for all running transactions to finish.
      2) Previously, barring the race, a slot could be created as soon as
         all transactions perceived as running (based on commit/abort
         records) had finished; now we have to wait for the next
         xl_running_xacts record.
      To address that, trigger logging of a new xl_running_xacts record
      from within snapbuild.c exactly when necessary.
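
      A hedged sketch of that trigger (simplified; builder-state details
      may differ from the real snapbuild.c logic):

          /*
           * Still waiting for running transactions to finish?  Ask for a
           * fresh xl_running_xacts record now rather than waiting for the
           * next background one.
           */
          if (builder->state < SNAPBUILD_CONSISTENT)
              LogStandbySnapshot();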
      
      Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the
      stupider ideas of a certain Mr Freund, so we can't change it in a
      minor release.  As this is going to be backpatched, we have to hack
      around a bit to keep on-disk compatibility.  A later commit will
      rejigger that on master.
      
      Author: Andres Freund, based on a quite different patch from Petr Jelinek
      Analyzed-By: Petr Jelinek
      Reviewed-By: Petr Jelinek
      Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
      Backpatch: 9.4-, where logical decoding has been introduced
    • Redesign get_attstatsslot()/free_attstatsslot() for more safety and speed. · 9aab83fc
      Tom Lane authored
      The mess cleaned up in commit da075960 is clear evidence that it's a
      bug hazard to expect the caller of get_attstatsslot()/free_attstatsslot()
      to provide the correct type OID for the array elements in the slot.
      Moreover, we weren't even getting any performance benefit from that,
      since get_attstatsslot() was extracting the real type OID from the array
      anyway.  So we ought to get rid of that requirement; indeed, it would
      make more sense for get_attstatsslot() to pass back the type OID it found,
      in case the caller isn't sure what to expect, which is likely in binary-
      compatible-operator cases.
      
      Another problem with the current implementation is that if the stats array
      element type is pass-by-reference, we incur a palloc/memcpy/pfree cycle
      for each element.  That seemed acceptable when the code was written because
      we were targeting O(10) array sizes --- but these days, stats arrays are
      almost always bigger than that, sometimes much bigger.  We can save a
      significant number of cycles by doing one palloc/memcpy/pfree of the whole
      array.  Indeed, in the now-probably-common case where the array is toasted,
      that happens anyway so this method is basically free.  (Note: although the
      catcache code will inline any out-of-line toasted values, it doesn't
      decompress them.  At the other end of the size range, it doesn't expand
      short-header datums either.  In either case, DatumGetArrayTypeP would have
      to make a copy.  We do end up using an extra array copy step if the element
      type is pass-by-value and the array length is neither small enough for a
      short header nor large enough to have suffered compression.  But that
      seems like a very acceptable price for winning in pass-by-ref cases.)
      
      Hence, redesign to take these insights into account.  While at it,
      convert to an API in which we fill a struct rather than passing a bunch
      of pointers to individual output arguments.  That will make it less
      painful if we ever want further expansion of what get_attstatsslot can
      pass back.
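
      Typical usage of the struct-filling API then looks roughly like this
      (a hedged sketch based on the description above):

          AttStatsSlot sslot;

          if (get_attstatsslot(&sslot, statsTuple,
                               STATISTIC_KIND_HISTOGRAM, InvalidOid,
                               ATTSTATSSLOT_VALUES))
          {
              /* sslot.values/sslot.nvalues now belong to the caller, and
               * sslot.valuetype reports the element type actually found */
              do_something_with(sslot.values, sslot.nvalues); /* illustrative */
              free_attstatsslot(&sslot);
          }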
      
      It's certainly arguable that this is new development and not something to
      push post-feature-freeze.  However, I view it as primarily bug-proofing
      and therefore something that's better to have sooner not later.  Since
      we aren't quite at beta phase yet, let's put it in.
      
      Discussion: https://postgr.es/m/16364.1494520862@sss.pgh.pa.us
  16. May 11, 2017
  17. May 08, 2017
    • Remove support for password_encryption='off' / 'plain'. · eb61136d
      Heikki Linnakangas authored
      Storing passwords in plaintext hasn't been a good idea for a very long
      time, if ever. Now seems like a good time to finally forbid it, since we're
      messing with this in PostgreSQL 10 anyway.
      
      Remove the CREATE/ALTER USER UNENCRYPTED PASSWORD 'foo' syntax, since
      storing passwords unencrypted is no longer supported.  ENCRYPTED
      PASSWORD 'foo' is still accepted, but ENCRYPTED is now just a noise
      word: it does the same thing as PASSWORD 'foo' alone.
      
      Likewise, remove the --unencrypted option from createuser, but accept
      --encrypted as a no-op for backward compatibility. AFAICS, --encrypted was
      a no-op even before this patch, because createuser encrypted the password
      before sending it to the server even if --encrypted was not specified. It
      added the ENCRYPTED keyword to the SQL command, but since the password was
      already in encrypted form, it didn't make any difference. The documentation
      was not clear on whether that was intended or not, but it's moot now.
      
      Also, while password_encryption='on' is still accepted as an alias for
      'md5', it is now marked as hidden, so that it is not listed as an accepted
      value in error hints, for example. That's not directly related to removing
      'plain', but it seems better this way.
      
      Reviewed by Michael Paquier
      
      Discussion: https://www.postgresql.org/message-id/16e9b768-fd78-0b12-cfc1-7b6b7f238fde@iki.fi
  18. Apr 25, 2017
    • postgres_fdw: Fix join push down with extensions · 332bec1e
      Peter Eisentraut authored
      Objects in an extension are shippable to a foreign server if the
      extension is part of the foreign server definition's shippable
      extensions list.  But this was not properly considered in some cases
      when checking whether a join condition can be pushed to a foreign server
      and the join condition uses an object from a shippable extension.  So
      the join would never be pushed down in those cases.
      
      Hence, the list of extensions needs to be made available in the
      fpinfo of the relation being considered for push-down before any
      expressions are assessed for shippability.  Fix foreign_join_ok() to
      do that for a join relation.
      
      The code to save FDW options in fpinfo is scattered at multiple places.
      Bring all of that together into functions apply_server_options(),
      apply_table_options(), and merge_fdw_options().
      
      David Rowley and Ashutosh Bapat, per report from David Rowley
  19. Apr 17, 2017
  20. Apr 14, 2017
    • Clean up manipulations of hash indexes' hasho_flag field. · 2040bb4a
      Tom Lane authored
      Standardize on testing a hash index page's type by doing
      	(opaque->hasho_flag & LH_PAGE_TYPE) == LH_xxx_PAGE
      Various places were taking shortcuts like
      	opaque->hasho_flag & LH_BUCKET_PAGE
      which while not actually wrong, is still bad practice because
      it encourages use of
      	opaque->hasho_flag & LH_UNUSED_PAGE
      which *is* wrong (LH_UNUSED_PAGE == 0, so the above is constant false).
      hash_xlog.c's hash_mask() contained such an incorrect test.
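
      That is, masking with LH_PAGE_TYPE before comparing yields a
      trustworthy answer for every page type, including the all-zero
      LH_UNUSED_PAGE:

          /* correct even though LH_UNUSED_PAGE == 0 */
          if ((opaque->hasho_flag & LH_PAGE_TYPE) == LH_UNUSED_PAGE)
          {
              /* the page really is unused */
          }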
      
      This also ensures that we mask out the additional flag bits that
      hasho_flag has accreted since 9.6.  pgstattuple's pgstat_hash_page(),
      for one, was failing to do that and was thus actively broken.
      
      Also fix assorted comments that hadn't been updated to reflect the
      extended usage of hasho_flag, and fix some macros that were testing
      just "(hasho_flag & bit)" to use the less dangerous, project-approved
      form "((hasho_flag & bit) != 0)".
      
      Coverity found the bug in hash_mask(); I noted the one in
      pgstat_hash_page() through code reading.
    • Further fix pg_trgm's extraction of trigrams from regular expressions. · 1dffabed
      Tom Lane authored
      Commit 9e43e871 turns out to have been insufficient: not only is it
      necessary to track tentative parent links while considering a set of
      arc removals, but it's necessary to track tentative flag additions
      as well.  This is because we always merge arc target states into
      arc source states; therefore, when considering a merge of the final
      state with some other, it is the other state that will acquire a new
      TSTATE_FIN bit.  If there's another arc for the same color trigram
      that would cause merging of that state with the initial state, we
      failed to recognize the problem.  The test cases for the prior commit
      evidently only exercised situations where a tentative merge with the
      initial state occurs before one with the final state.  If it goes the
      other way around, we'll happily merge the initial and final states,
      either producing a broken final graph that would never match anything,
      or triggering the Assert added by the prior commit.
      
      It's tempting to consider switching the merge direction when the merge
      involves the final state, but I lack the time to analyze that idea in
      detail.  Instead just keep track of the flag changes that would result
      from proposed merges, in the same way that the prior commit tracked
      proposed parent links.
      
      Along the way, add some more debugging support, because I'm not entirely
      confident that this is the last bug here.  And tweak matters so that
      the transformed.dot file uses small integers rather than pointer values
      to identify states; that makes it more readable if you're just eyeballing
      it rather than fooling with Graphviz.  And rename a couple of identically
      named struct fields to reduce confusion.
      
      Per report from Corey Csuhta.  Add a test case based on his example.
      (Note: this case does not trigger the bug under 9.3, apparently because
      its different measurement of costs causes it to stop merging states before
      it hits the failure.  I spent some time trying to find a variant that would
      fail in 9.3, without success; but I'm sure such cases exist.)
      
      Like the previous patch, back-patch to 9.3 where this code was added.
      
      Report: https://postgr.es/m/E2B01A4B-4530-406B-8D17-2F67CF9A16BA@csuhta.com
    • Remove useless trailing spaces in queries in C strings · 0c22327f
      Peter Eisentraut authored
      Author: Alexander Law <exclusion@gmail.com>
  21. Apr 13, 2017
    • Fix regexport.c to behave sanely with lookaround constraints. · 6cfaffc0
      Tom Lane authored
      regexport.c thought it could just ignore LACON arcs, but the correct
      behavior is to treat them as satisfiable while consuming zero input
      (rather reminiscently of commit 9f1e642d).  Otherwise, the emitted
      simplified-NFA representation may contain no paths leading from initial
      to final state, which unsurprisingly confuses pg_trgm, as seen in
      bug #14623 from Jeff Janes.
      
      Since regexport's output representation has no concept of an arc that
      consumes zero input, recurse internally to find the next normal arc(s)
      after any LACON transitions.  We'd be forced into changing that
      representation if a LACON could be the last arc reaching the final
      state, but fortunately the regex library never builds NFAs with such
      a configuration, so there always is a next normal arc.
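
      A hedged, toy-scale sketch of that recursion (types and names are
      invented for illustration; the real code is in regexport.c):

          typedef struct ToyState ToyState;
          typedef struct ToyArc
          {
              struct ToyArc *next;    /* next out-arc of the same state */
              ToyState   *to;         /* target state */
              bool        is_lacon;   /* zero-width lookaround arc? */
          } ToyArc;
          struct ToyState
          {
              ToyArc     *outs;       /* head of the out-arc list */
          };

          /* report each normal arc reachable via zero or more LACON arcs */
          static void
          emit_normal_arcs(ToyState *s, void (*emit) (ToyArc *))
          {
              ToyArc     *a;

              for (a = s->outs; a != NULL; a = a->next)
              {
                  if (a->is_lacon)
                      emit_normal_arcs(a->to, emit);  /* look through it */
                  else
                      emit(a);        /* the next normal arc */
              }
          }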
      
      Back-patch to 9.3 where this logic was introduced.
      
      Discussion: https://postgr.es/m/20170413180503.25948.94871@wrigleys.postgresql.org
  22. Apr 12, 2017
    • Fix pgstattuple's handling of unused hash pages. · 9cc27566
      Robert Haas authored
      Hash indexes can contain both pages which are all-zeroes (i.e.
      PageIsNew()) and pages which have been initialized but currently
      aren't used.  The latter category can happen either when a page
      has been reserved but not yet used or when it is used for a time
      and then freed.  pgstattuple was only prepared to deal with the
      pages that are actually all-zeroes, which it called zero_pages.
      Rename the column to unused_pages (extension version 1.5 is
      as-yet-unreleased) and make it count both kinds of unused pages.
      
      Along the way, slightly tidy up the way we test for pages of
      various types.
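
      A hedged sketch of the classification now performed (simplified;
      field names are illustrative):

          if (PageIsNew(page))
              stats.unused_pages++;   /* all-zeroes: never initialized */
          else if ((opaque->hasho_flag & LH_PAGE_TYPE) == LH_UNUSED_PAGE)
              stats.unused_pages++;   /* initialized but currently free */
          else if ((opaque->hasho_flag & LH_PAGE_TYPE) == LH_BUCKET_PAGE)
              stats.bucket_pages++;   /* and so on for the other types */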
      
      Robert Haas and Ashutosh Sharma, reviewed by Amit Kapila
      
      Discussion: http://postgr.es/m/CAE9k0PkTtKFB3YndOyQMjwuHx+-FtUP1ynK8E-nHtetoow3NtQ@mail.gmail.com
  23. Apr 11, 2017
    • Simplify handling of remote-qual pass-forward in postgres_fdw. · 88e902b7
      Tom Lane authored
      Commit 0bf3ae88 encountered a need to pass the finally chosen remote qual
      conditions forward from postgresGetForeignPlan to postgresPlanDirectModify.
      It solved that by sticking them into the plan node's fdw_private list,
      which in hindsight was a pretty bad idea.  In the first place, there's no
      use for those qual trees either in EXPLAIN or execution; indeed they could
      never safely be used for any post-planning purposes, because they would not
      get processed by setrefs.c.  So they're just dead weight to carry around in
      the finished plan tree, plus being an attractive nuisance for somebody who
      might get the idea that they could be used that way.  Secondly, because
      those qual trees (sometimes) contained RestrictInfos, they created a
      plan-transmission hazard for parallel query, which is how come we noticed a
      problem.  We dealt with that symptom in commit 28b04787, but really a more
      straightforward and more efficient fix is to pass the data through in a new
      field of struct PgFdwRelationInfo.  So do it that way.  (There's no need
      to revert 28b04787, as it has sufficient reason to live anyway.)
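
      A hedged sketch of the approach (the field name follows this
      description; check postgres_fdw.h for the authoritative declaration):

          typedef struct PgFdwRelationInfo
          {
              /* ... existing planning fields ... */

              /*
               * Final remote conditions, passed from postgresGetForeignPlan
               * to postgresPlanDirectModify within one planning cycle.
               */
              List       *final_remote_exprs;
          } PgFdwRelationInfo;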
      
      Per fuzz testing by Andreas Seltenreich.
      
      Discussion: https://postgr.es/m/87tw5x4vcu.fsf@credativ.de
    • Handle restriction clause lists more uniformly in postgres_fdw. · 28b04787
      Tom Lane authored
      Clauses in the lists retained by postgres_fdw during planning were
      sometimes bare boolean clauses, sometimes RestrictInfos, and sometimes
      a mixture of the two in the same list.  The comment about that situation
      didn't come close to telling the full truth, either.  Aside from being
      confusing, this had a couple of bad practical consequences:
      * waste of planning cycles due to inability to cache per-clause selectivity
      and cost estimates;
      * sometimes, RestrictInfos would sneak into the fdw_private list of a
      finished Plan node, causing failures if, for example, we tried to ship
      the Plan tree to a parallel worker.
      (It may well be that it's a bug in the parallel-query logic that we
      would ever try to ship such a plan to a parallel worker, but in any
      case this deserves to be cleaned up.)
      
      To fix, rearrange so that clause lists in PgFdwRelationInfo are always
      lists of RestrictInfos, and then strip the RestrictInfos at the last
      minute when making a Plan node.  In passing do a bit of refactoring and
      comment cleanup in postgresGetForeignPlan and foreign_join_ok.
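
      The last-minute stripping can use the planner's standard helper for
      this; a hedged sketch:

          /* keep RestrictInfos while planning ... */
          remote_exprs = extract_actual_clauses(fpinfo->remote_conds, false);
          local_exprs = extract_actual_clauses(fpinfo->local_conds, false);
          /* ... and hand only bare clause trees to the finished Plan node */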
      
      Although the messiness here dates back at least to 9.6, there's no evidence
      that it causes anything worse than wasted planning cycles in 9.6, so no
      back-patch for now.
      
      Per fuzz testing by Andreas Seltenreich.
      
      Tom Lane and Ashutosh Bapat
      
      Discussion: https://postgr.es/m/87tw5x4vcu.fsf@credativ.de
  24. Apr 10, 2017
  25. Apr 09, 2017
  26. Apr 08, 2017
    • Optimize joins when the inner relation can be proven unique. · 9c7f5229
      Tom Lane authored
      If there can certainly be no more than one matching inner row for a given
      outer row, then the executor can move on to the next outer row as soon as
      it's found one match; there's no need to continue scanning the inner
      relation for this outer row.  This saves useless scanning in nestloop
      and hash joins.  In merge joins, it offers the opportunity to skip
      mark/restore processing, because we know we have not advanced past the
      first possible match for the next outer row.
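
      In executor terms, a hedged sketch of the nestloop shortcut (flag
      names follow the idea described here and are worth double-checking
      against the actual patch):

          if (qualResult && node->js.single_match)
          {
              /*
               * Inner side proven unique: no further match can exist, so
               * fetch the next outer tuple immediately.
               */
              node->nl_NeedNewOuter = true;
          }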
      
      Of course, the devil is in the details: the proof of uniqueness must
      depend only on joinquals (not otherquals), and if we want to skip
      mergejoin mark/restore then it must depend only on merge clauses.
      To avoid adding more planning overhead than absolutely necessary,
      the present patch errs in the conservative direction: there are cases
      where inner_unique or skip_mark_restore processing could be used, but
      it will not do so because it's not sure that the uniqueness proof
      depended only on "safe" clauses.  This could be improved later.
      
      David Rowley, reviewed and rather heavily editorialized on by me
      
      Discussion: https://postgr.es/m/CAApHDvqF6Sw-TK98bW48TdtFJ+3a7D2mFyZ7++=D-RyPsL76gw@mail.gmail.com
    • Reduce the number of pallocs() in BRIN · 8bf74967
      Alvaro Herrera authored
      Instead of allocating memory in brin_deform_tuple and brin_copy_tuple
      over and over during a scan, allow reuse of previously allocated memory.
      This is said to make for a measurable performance improvement.
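
      A hedged sketch of the reuse pattern (simplified; next_index_tuple is
      a hypothetical iterator standing in for the real scan loop):

          BrinMemTuple *dtup = brin_new_memtuple(bdesc);  /* allocated once */
          BrinTuple  *btup;

          while ((btup = next_index_tuple(scan)) != NULL)
          {
              /* deform into the preallocated dtup instead of a fresh palloc */
              dtup = brin_deform_tuple(bdesc, btup, dtup);
          }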
      
      Author: Jinyu Zhang, Álvaro Herrera
      Reviewed by: Tomas Vondra
      Discussion: https://postgr.es/m/495deb78.4186.1500dacaa63.Coremail.beijing_pg@163.com
  27. Apr 07, 2017
  28. Apr 06, 2017