  2. May 16, 2015
    • Andres Freund's avatar
      Support GROUPING SETS, CUBE and ROLLUP. · f3d31185
      Andres Freund authored
      This SQL standard functionality allows aggregating data by several
      different GROUP BY clauses at once. Each grouping set returns rows in
      which the columns grouped by only in other sets are set to NULL.
      
      This could previously be achieved only by running each grouping as a
      separate query and combining the results with UNION ALL. Besides being
      considerably more concise, grouping sets will in many cases be faster,
      requiring only one scan over the underlying data.
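
      For illustration, a minimal sketch (table and column names are
      hypothetical); the grouping-sets query is roughly equivalent to the
      UNION ALL form it replaces:

          SELECT brand, size, sum(sales)
          FROM items_sold
          GROUP BY GROUPING SETS ((brand), (size), ());

          -- roughly equivalent to the older formulation:
          SELECT brand, NULL AS size, sum(sales) FROM items_sold GROUP BY brand
          UNION ALL
          SELECT NULL, size, sum(sales) FROM items_sold GROUP BY size
          UNION ALL
          SELECT NULL, NULL, sum(sales) FROM items_sold;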
      
      The current implementation of grouping sets only supports using sorting
      for input. Individual sets that share a sort order are computed in one
      pass. If there are sets that don't share a sort order, additional sort &
      aggregation steps are performed. These additional passes are sourced
      from the previous sort step, thus avoiding repeated scans of the source
      data.
      
      The code is structured in a way that adding support for purely using
      hash aggregation or a mix of hashing and sorting is possible. Sorting
      was chosen to be supported first, as it is the most generic method of
      implementation.
      
      Instead of, as in earlier versions of the patch, representing the
      chain of sort and aggregation steps as full-blown planner and executor
      nodes, all but the first sort are performed inside the aggregation node
      itself. This avoids the need to do some unusual gymnastics to handle
      having to return aggregated and non-aggregated tuples from underlying
      nodes, as well as having to shut down underlying nodes early to limit
      memory usage.  The optimizer still builds Sort/Agg nodes to describe
      each phase, but they are not part of the plan tree; instead they are
      additional data for the aggregation node. They're a convenient and
      preexisting way to describe aggregation and sorting.  The first (and
      possibly only) sort step is still performed as a separate execution
      step. That retains similarity with existing GROUP BY plans, makes
      rescans fairly simple, avoids very deep plans (leading to slow
      EXPLAINs), and makes it easy to skip the sorting step if the underlying
      data is sorted by other means.
      
      A somewhat ugly side of this patch is having to deal with a grammar
      ambiguity between the new CUBE keyword and the cube extension/functions
      named cube (and rollup). To avoid breaking existing deployments of the
      cube extension, it has not been renamed, nor has cube been made a
      reserved keyword. Instead, precedence hacking is used to make GROUP BY
      cube(..) refer to the CUBE grouping sets feature, and not the function
      cube(). To actually group by a function cube(), unlikely as that might
      be, the function name has to be quoted.
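
      A brief sketch of the resulting behavior (column names hypothetical):

          GROUP BY cube(a, b)    -- parsed as the CUBE grouping-sets construct
          GROUP BY "cube"(a, b)  -- quoted: calls a function named cube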
      
      Needs a catversion bump because stored rules may change.
      
      Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
      Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
          Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
      Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
      f3d31185
  3. May 15, 2015
    • Alvaro Herrera's avatar
      Move strategy numbers to include/access/stratnum.h · 26df7066
      Alvaro Herrera authored
      For upcoming BRIN opclasses, it's convenient to have strategy numbers
      defined in a single place.  Since there's nothing appropriate, create
      it.  The StrategyNumber typedef now lives there, as well as existing
      strategy numbers for B-trees (from skey.h) and R-tree-and-friends (from
      gist.h).  skey.h is forced to include stratnum.h because of the
      StrategyNumber typedef, but gist.h is not; extensions that currently
      rely on gist.h for rtree strategy numbers might need to add a new
      include of stratnum.h.
      
      A few .c files can stop including skey.h and/or gist.h, which is a nice
      side benefit.
      
      Per discussion:
      https://www.postgresql.org/message-id/20150514232132.GZ2523@alvh.no-ip.org
      
      Authored by Emre Hasegeli and Álvaro.
      
      (It's not clear to me why bootscanner.l has any #include lines at all.)
      26df7066
    • Simon Riggs's avatar
      Add to contrib/Makefile · df259759
      Simon Riggs authored
      df259759
    • Simon Riggs's avatar
      contrib/tsm_system_time · 56e121a5
      Simon Riggs authored
      56e121a5
    • Simon Riggs's avatar
      contrib/tsm_system_rows · 4d40494b
      Simon Riggs authored
      4d40494b
    • Simon Riggs's avatar
      TABLESAMPLE, SQL Standard and extensible · f6d208d6
      Simon Riggs authored
      Add a TABLESAMPLE clause to SELECT statements that allows the
      user to specify random BERNOULLI sampling or block-level
      SYSTEM sampling. The implementation allows extensible
      sampling functions to be written, using a standard API. The
      basic version follows the SQL standard exactly. Usable
      concrete use cases for the sampling API follow in later
      commits.
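
      A minimal sketch of the syntax (table name hypothetical):

          SELECT * FROM orders TABLESAMPLE BERNOULLI (10);              -- ~10% of rows
          SELECT * FROM orders TABLESAMPLE SYSTEM (10) REPEATABLE (42); -- ~10% of blocks, seeded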
      
      Petr Jelinek
      
      Reviewed by Michael Paquier and Simon Riggs
      f6d208d6
    • Stephen Frost's avatar
      Remove useless pg_audit.conf · aff27e33
      Stephen Frost authored
      No need to have pg_audit.conf any longer since the regression tests are
      just loading the module at the start of each session (to simulate being
      in shared_preload_libraries, which isn't something we can actually make
      happen on the buildfarm itself, it seems).
      
      Pointed out by Tom
      aff27e33
    • Simon Riggs's avatar
      Separate block sampling functions · 83e176ec
      Simon Riggs authored
      Refactoring ahead of tablesample patch
      
      Requested and reviewed by Michael Paquier
      
      Petr Jelinek
      83e176ec
  4. May 14, 2015
    • Stephen Frost's avatar
      Make repeated 'make installcheck' runs work · b22b7706
      Stephen Frost authored
      In pg_audit, set client_min_messages up to warning, then reset the role
      attributes, to completely reset the session while not making the
      regression tests depend on being run by any particular user.
      b22b7706
    • Stephen Frost's avatar
      Improve pg_audit regression tests · ed6ea8e8
      Stephen Frost authored
      Instead of creating a new superuser role, extract out what the current
      user is and use that user instead.  Further, clean up and drop all
      objects created by the regression test.
      
      Pointed out by Tom.
      ed6ea8e8
    • Tom Lane's avatar
      Fix portability issue in pg_audit. · 35a1e1d1
      Tom Lane authored
      "%ld" is not a portable way to print int64's.  This may explain the
      buildfarm crashes we're seeing --- it seems to make dromedary happy,
      at least.
      35a1e1d1
    • Tom Lane's avatar
      Suppress uninitialized-variable warning. · 6c9e93d3
      Tom Lane authored
      6c9e93d3
    • Stephen Frost's avatar
      Further fixes for the buildfarm for pg_audit · 8a2e1edd
      Stephen Frost authored
      Also, use a function to load the extension ahead of all other calls,
      simulating load from shared_preload_libraries, to make sure the
      hooks are in place before logging starts.
      8a2e1edd
    • Stephen Frost's avatar
      Further fixes for the buildfarm for pg_audit · c703b1e6
      Stephen Frost authored
      The database built by the buildfarm is specific to the extension;
      use \connect - instead.
      c703b1e6
    • Stephen Frost's avatar
      Fix buildfarm with regard to pg_audit · dfb7624a
      Stephen Frost authored
      Remove the check that pg_audit be installed by
      shared_preload_libraries, as that's not going to work when running the
      regression tests in the buildfarm.  That check was primarily a
      nice-to-have and isn't required anyway.
      dfb7624a
    • Stephen Frost's avatar
      Add pg_audit, an auditing extension · ac52bb04
      Stephen Frost authored
      This extension provides detailed logging classes, the ability to
      control logging at a per-object level, and fully-qualified object
      names for logged statements (DML and DDL) in independent fields of the
      log output.
      
      Authors: Ian Barwick, Abhijit Menon-Sen, David Steele
      Reviews by: Robert Haas, Tatsuo Ishii, Sawada Masahiko, Fujii Masao,
      Simon Riggs
      
      Discussion with: Josh Berkus, Jaime Casanova, Peter Eisentraut,
      David Fetter, Yeb Havinga, Alvaro Herrera, Petr Jelinek, Tom Lane,
      MauMau, Bruce Momjian, Jim Nasby, Michael Paquier,
      Fabrízio de Royes Mello, Neil Tiffin
      ac52bb04
  5. May 13, 2015
    • Tom Lane's avatar
      Fix postgres_fdw to return the right ctid value in EvalPlanQual cases. · 0bb8528b
      Tom Lane authored
      If a postgres_fdw foreign table is a non-locked source relation in an
      UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the query selects its
      ctid column, the wrong value would be returned if an EvalPlanQual
      recheck occurred.  This happened because the foreign table's result row
      was copied via the ROW_MARK_COPY code path, and EvalPlanQualFetchRowMarks
      just unconditionally set the reconstructed tuple's t_self to "invalid".
      
      To fix that, we can have EvalPlanQualFetchRowMarks copy the composite
      datum's t_ctid field, and be sure to initialize that along with t_self
      when postgres_fdw constructs a tuple to return.
      
      If we just did that much then EvalPlanQualFetchRowMarks would start
      returning "(0,0)" as ctid for all other ROW_MARK_COPY cases, which perhaps
      does not matter much, but then again maybe it might.  The cause of that is
      that heap_form_tuple, which is the ultimate source of all composite datums,
      simply leaves t_ctid as zeroes in newly constructed tuples.  That seems
      like a bad idea on general principles: a field that's really not been
      initialized shouldn't appear to have a valid value.  So let's eat the
      trivial additional overhead of doing "ItemPointerSetInvalid(&(td->t_ctid))"
      in heap_form_tuple.
      
      This closes out our handling of Etsuro Fujita's report that tableoid and
      ctid weren't correctly set in postgres_fdw EvalPlanQual cases.  Along the
      way we did a great deal of work to improve FDWs' ability to control row
      locking behavior; which was not wasted effort by any means, but it didn't
      end up being a fix for this problem because that feature would be too
      expensive for postgres_fdw to use all the time.
      
      Although the fix for the tableoid misbehavior was back-patched, I'm
      hesitant to do so here; it seems far less likely that people would care
      about remote ctid than tableoid, and even such a minor behavioral change
      as this in heap_form_tuple is perhaps best not back-patched.  So commit
      to HEAD only, at least for the moment.
      
      Etsuro Fujita, with some adjustments by me
      0bb8528b
    • Andres Freund's avatar
      Add pgstattuple_approx() to the pgstattuple extension. · 5850b20f
      Andres Freund authored
      The new function allows estimating bloat and other table-level
      statistics in a faster, but approximate, way. It does so by using
      information from the free space map for pages marked as all-visible in
      the visibility map. The rest of the table is actually read, and free
      space/bloat is measured accurately.  In many cases that yields bloat
      information much more quickly, causing less I/O.
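
      Usage is analogous to pgstattuple(); a sketch with a hypothetical
      table name:

          SELECT * FROM pgstattuple_approx('some_table'::regclass);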
      
      Author: Abhijit Menon-Sen
      Reviewed-By: Andres Freund, Amit Kapila and Tomas Vondra
      Discussion: 20140402214144.GA28681@kea.toroid.org
      5850b20f
  7. May 10, 2015
    • Tom Lane's avatar
      Code review for foreign/custom join pushdown patch. · 1a8a4e5c
      Tom Lane authored
      Commit e7cb7ee1 included some design
      decisions that seem pretty questionable to me, and there was quite a lot
      of stuff not to like about the documentation and comments.  Clean up
      as follows:
      
      * Consider foreign joins only between foreign tables on the same server,
      rather than between any two foreign tables with the same underlying FDW
      handler function.  In most if not all cases, the FDW would simply have had
      to apply the same-server restriction itself (far more expensively, both for
      lack of caching and because it would be repeated for each combination of
      input sub-joins), or else risk nasty bugs.  Anyone who's really intent on
      doing something outside this restriction can always use the
      set_join_pathlist_hook.
      
      * Rename fdw_ps_tlist/custom_ps_tlist to fdw_scan_tlist/custom_scan_tlist
      to better reflect what they're for, and allow these custom scan tlists
      to be used even for base relations.
      
      * Change make_foreignscan() API to include passing the fdw_scan_tlist
      value, since the FDW is required to set that.  Backwards compatibility
      doesn't seem like an adequate reason to expect FDWs to set it in some
      ad-hoc extra step, and anyway existing FDWs can just pass NIL.
      
      * Change the API of path-generating subroutines of add_paths_to_joinrel,
      and in particular that of GetForeignJoinPaths and set_join_pathlist_hook,
      so that various less-used parameters are passed in a struct rather than
      as separate parameter-list entries.  The objective here is to reduce the
      probability that future additions to those parameter lists will result in
      source-level API breaks for users of these hooks.  It's possible that this
      is even a small win for the core code, since most CPU architectures can't
      pass more than half a dozen parameters efficiently anyway.  I kept root,
      joinrel, outerrel, innerrel, and jointype as separate parameters to reduce
      code churn in joinpath.c --- in particular, putting jointype into the
      struct would have been problematic because of the subroutines' habit of
      changing their local copies of that variable.
      
      * Avoid ad-hocery in ExecAssignScanProjectionInfo.  It was probably all
      right for it to know about IndexOnlyScan, but if the list is to grow
      we should refactor the knowledge out to the callers.
      
      * Restore nodeForeignscan.c's previous use of the relcache to avoid
      extra GetFdwRoutine lookups for base-relation scans.
      
      * Lots of cleanup of documentation and missed comments.  Re-order some
      code additions into more logical places.
      1a8a4e5c
  8. May 09, 2015
    • Andrew Dunstan's avatar
      Add new OID alias type regrole · 0c90f676
      Andrew Dunstan authored
      The new type has the scope of the whole database cluster, so with
      respect to object dependencies it doesn't behave the same as the
      existing OID alias types, which have database scope. To avoid
      confusion, constants of the new type are prohibited from appearing
      where dependencies involving them would be created.

      Also, add a note to the docs about possible MVCC violation and
      optimization issues, which apply generally to all reg* types.
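
      A hypothetical illustration of the new casts:

          SELECT 'postgres'::regrole;   -- role name to OID
          SELECT 16384::regrole;        -- OID back to role name (OID hypothetical)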
      
      Kyotaro Horiguchi
      0c90f676
  9. May 08, 2015
    • Andres Freund's avatar
      Remove dependency on ordering in logical decoding upsert test. · 581f4f96
      Andres Freund authored
      Buildfarm member magpie sorted the output differently than intended by
      Peter. "Resolve" the problem by simply not aggregating; it's not that
      many lines.
      581f4f96
    • Andres Freund's avatar
      Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE. · 168d5805
      Andres Freund authored
      The newly added ON CONFLICT clause allows specifying an alternative to
      raising a unique or exclusion constraint violation error when inserting.
      ON CONFLICT refers to constraints that can either be specified using an
      inference clause (by specifying the columns of a unique constraint) or
      by naming a unique or exclusion constraint.  DO NOTHING avoids the
      constraint violation, without touching the pre-existing row.  DO UPDATE
      SET ... [WHERE ...] updates the pre-existing tuple, and has access to
      both the tuple proposed for insertion and the existing tuple; the
      optional WHERE clause can be used to prevent an update from being
      executed.  The UPDATE SET and WHERE clauses have access to the tuple
      proposed for insertion using the "magic" EXCLUDED alias, and to the
      pre-existing tuple using the table name or its alias.
      
      This feature is often referred to as upsert.
      
      This is implemented using a new infrastructure called "speculative
      insertion". It is an optimistic variant of regular insertion that first
      does a pre-check for existing tuples and then attempts an insert.  If a
      violating tuple was inserted concurrently, the speculatively inserted
      tuple is deleted and a new attempt is made.  If the pre-check finds a
      matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
      If the insertion succeeds without detecting a conflict, the tuple is
      deemed inserted.
      
      To handle the possible ambiguity between the excluded alias and a table
      named excluded, and for convenience with long relation names, INSERT
      INTO can now alias its target table.
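
      A minimal upsert sketch (table and column names hypothetical),
      showing the target-table alias, EXCLUDED, and the optional WHERE:

          INSERT INTO counters AS c (key, hits)
          VALUES ('home', 1)
          ON CONFLICT (key) DO UPDATE
            SET hits = c.hits + EXCLUDED.hits
            WHERE c.hits < 1000;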
      
      Bumps catversion as stored rules change.
      
      Author: Peter Geoghegan, with significant contributions from Heikki
          Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
      Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
          Dean Rasheed, Stephen Frost and many others.
      168d5805
    • Andres Freund's avatar
      Represent columns requiring insert and update privileges independently. · 2c8f4836
      Andres Freund authored
      Previously, relation range table entries used a single Bitmapset field
      representing which columns required either UPDATE or INSERT privileges,
      despite the fact that INSERT and UPDATE privileges are separately
      cataloged, and may be independently held.  As statements so far required
      either insert or update privileges but never both, that was
      sufficient. The required permission could be inferred from the
      top-level statement being run.
      
      The upcoming INSERT ... ON CONFLICT UPDATE feature needs to
      independently check for both privileges in one statement though, so that
      is not sufficient anymore.
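
      For instance (hypothetical objects), an upsert must now pass both
      checks:

          GRANT INSERT (key, hits), UPDATE (hits) ON counters TO app_user;
          -- an INSERT ... ON CONFLICT DO UPDATE by app_user requires both
          -- the INSERT and the UPDATE column privileges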
      
      Bumps catversion as stored rules change.
      
      Author: Peter Geoghegan
      Reviewed-By: Andres Freund
      2c8f4836
  10. May 07, 2015
    • Alvaro Herrera's avatar
      Improve BRIN infra, minmax opclass and regression test · db5f98ab
      Alvaro Herrera authored
      The minmax opclass was using the wrong support functions when
      cross-datatypes queries were run.  Instead of trying to fix the
      pg_amproc definitions (which apparently is not possible), use the
      already correct pg_amop entries instead.  This requires jumping through
      more hoops (read: extra syscache lookups) to obtain the underlying
      functions to execute, but it is necessary for correctness.
      
      Author: Emre Hasegeli, tweaked by Álvaro
      Review: Andreas Karlsson
      
      Also change BrinOpcInfo to record each stored type's typecache entry
      instead of just the OID.  Turns out that the full type cache is
      necessary in brin_deform_tuple: the original code used the indexed
      type's byval and typlen properties to extract the stored tuple, which is
      correct in Minmax; but in other implementations that want to store
      something different, that's wrong.  The realization that this is a bug
      comes from Emre also, but I did not use his patch.
      
      I also adopted Emre's regression test code (with smallish changes),
      which is more complete.
      db5f98ab
  11. May 05, 2015
    • Tom Lane's avatar
      Fix incorrect declaration of citext's regexp_matches() functions. · b22527f2
      Tom Lane authored
      These functions should return SETOF TEXT[], like the core functions they
      are wrappers for; but they were incorrectly declared as returning just
      TEXT[].  This mistake had two results: first, if there was no match you got
      a scalar null result, whereas what you should get is an empty set (zero
      rows).  Second, the 'g' flag was effectively ignored, since you would get
      only one result array even if there were multiple matches, as reported by
      Jeff Certain.
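
      A hypothetical comparison of the two behaviors:

          SELECT regexp_matches('barbeque'::citext, '(b[^b]+)', 'g');
          -- citext 1.0: a single text[] for the first match; 'g' ignored
          -- citext 1.1: SETOF text[], one row per match, zero rows if none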
      
      While ignoring 'g' is a clear bug, the behavior for no matches might well
      have been thought to be the intended behavior by people who hadn't compared
      it carefully to the core regexp_matches() functions.  So we should tread
      carefully about introducing this change in the back branches.  Still, it
      clearly is a bug and so providing some fix is desirable.
      
      After discussion, the conclusion was to introduce the change in a 1.1
      version of the citext extension (as we would need to do anyway); 1.0 still
      contains the incorrect behavior.  1.1 is the default and only available
      version in HEAD, but it is optional in the back branches, where 1.0 remains
      the default version.  People wishing to adopt the fix in back branches will
      need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'.  (I also
      provided a downgrade script in the back branches, so people could go back
      to 1.0 if necessary.)
      
      This should be called out as an incompatible change in the 9.5 release
      notes, although we'll also document it in the next set of back-branch
      release notes.  The notes should mention that any views or rules that use
      citext's regexp_matches() functions will need to be dropped before
      upgrading to 1.1, and then recreated again afterwards.
      
      Back-patch to 9.1.  The bug goes all the way back to citext's introduction
      in 8.4, but pre-9.1 there is no extension mechanism with which to manage
      the change.  Given the lack of previous complaints it seems unnecessary to
      change this behavior in 9.0, anyway.
      b22527f2
    • Peter Eisentraut's avatar
      hstore_plpython: Support tests on Python 2.3 · c0574cd5
      Peter Eisentraut authored
      Python 2.3 does not have the sorted() function, so do it the long way.
      c0574cd5
  12. May 03, 2015
    • Andrew Dunstan's avatar
      Enable transforms modules to build and run with Mingw builds. · f802c6dd
      Andrew Dunstan authored
      These modules were all missing essential Windows scaffolding, including
      resources files and descriptions, and links to the relevant library
      import files. This latter item means that the modules can't be built
      with pgxs on Windows, as we don't install the import files. If we ever
      decide to install them this restriction could probably be removed.
      
      Also, as with plperl, we need to make sure that perl's CORE directory
      is last on the include list, as on Windows it appears to contain some
      headers whose names clash with those of headers we include.
      f802c6dd
  14. May 01, 2015
    • Andrew Dunstan's avatar
      Make hstore_plperl's build more like plperl's · 77477e74
      Andrew Dunstan authored
      This involves moving perl's CORE library to the end of the include list,
      and adding other compilation settings that plperl uses. This won't
      completely fix the breakage currently being seen by gcc builds on
      Windows, but it will let the build get further, and should be wholly
      benign, if not beneficial, on *nix.
      77477e74
  15. Apr 30, 2015
    • Robert Haas's avatar
      Create an infrastructure for parallel computation in PostgreSQL. · 924bcf4f
      Robert Haas authored
      This does four basic things.  First, it provides convenience routines
      to coordinate the startup and shutdown of parallel workers.  Second,
      it synchronizes various pieces of state (e.g. GUCs, combo CID
      mappings, transaction snapshot) from the parallel group leader to the
      worker processes.  Third, it prohibits various operations that would
      result in unsafe changes to that state while parallelism is active.
      Finally, it propagates events that would result in an ErrorResponse,
      NoticeResponse, or NotifyResponse message being sent to the client
      from the parallel workers back to the master, from which they can then
      be sent on to the client.
      
      Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke.
      Suggestions and review from Andres Freund, Heikki Linnakangas, Noah
      Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
      924bcf4f
    • Peter Eisentraut's avatar
      Fix parallel make risk with new check temp-install setup · dbf2ec1a
      Peter Eisentraut authored
      The "check" target no longer needs to depend on "all", because it now
      runs "install" directly, which in turn depends on "all".  Doing both
      will cause problems with parallel make, because two builds will run next
      to each other.
      
      Also remove the redirection of the temp-install output into a log file.
      This was appropriate when this was done from within pg_regress, but now
      it's just a regular make run, and especially with the above changes this
      will now take the place of running the "all" target before the test
      suites.
      
      problem report by Jeff Janes, patch in part by Michael Paquier
      dbf2ec1a
  16. Apr 29, 2015
    • Andres Freund's avatar
      Introduce replication progress tracking infrastructure. · 5aa23504
      Andres Freund authored
      When implementing a replication solution on top of logical decoding, two
      related problems exist:
      * How to safely keep track of replication progress
      * How to change replication behavior, based on the origin of a row;
        e.g. to avoid loops in bi-directional replication setups
      
      The solution to these problems, as implemented here, consists of
      three parts:

      1) 'replication origins', which identify nodes in a replication setup.
      2) 'replication progress tracking', which remembers, for each
         replication origin, how far replay has progressed in an efficient
         and crash-safe manner.
      3) The ability to filter out changes performed at the behest of a
         replication origin during logical decoding; this allows complex
         replication topologies, e.g. by filtering out all replayed changes.
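
      Part of the SQL-level interface, sketched with a hypothetical origin
      name:

          SELECT pg_replication_origin_create('node_a');          -- register an origin
          SELECT pg_replication_origin_session_setup('node_a');   -- tag this session's changes
          SELECT pg_replication_origin_progress('node_a', false); -- how far replay has gotten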
      
      Most of this could also be implemented in "userspace", e.g. by inserting
      additional rows containing origin information, but that ends up being much
      less efficient and more complicated.  We don't want to require various
      replication solutions to reimplement logic for this independently. The
      infrastructure is intended to be generic enough to be reusable.
      
      This infrastructure also replaces the 'nodeid' infrastructure of commit
      timestamps. It is intended to provide all the former capabilities,
      except that there are only 2^16 different origins, which now integrate
      with logical decoding. Additionally, more functionality is accessible via
      SQL.  Since the commit timestamp infrastructure has also been introduced
      in 9.5 (commit 73c986ad) changing the API is not a problem.
      
      For now the number of origins for which the replication progress can be
      tracked simultaneously is determined by the max_replication_slots
      GUC. That GUC is not a perfect match to configure this, but there
      doesn't seem to be sufficient reason to introduce a separate new one.
      
      Bumps both catversion and wal page magic.
      
      Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
      Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
      Discussion: 20150216002155.GI15326@awork2.anarazel.de,
          20140923182422.GA15776@alap3.anarazel.de,
          20131114172632.GE7522@alap2.anarazel.de
      5aa23504
  17. Apr 26, 2015
    • Peter Eisentraut's avatar
      Fix hstore_plperl regression tests on some platforms · f9542547
      Peter Eisentraut authored
      On some platforms, plperl and plperlu cannot be loaded at the same
      time.  So split the test into two separate test files.
      f9542547
    • Peter Eisentraut's avatar
      Add transforms feature · cac76582
      Peter Eisentraut authored
      This provides a mechanism for specifying conversions between SQL data
      types and procedural languages.  As examples, there are transforms
      for hstore and ltree for PL/Perl and PL/Python.
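
      A sketch of how a transform is used (function name hypothetical),
      assuming the hstore/PL/Python transform extension added here:

          CREATE EXTENSION hstore;
          CREATE EXTENSION plpythonu;
          CREATE EXTENSION hstore_plpythonu;

          CREATE FUNCTION hstore_keys(h hstore) RETURNS text[]
          TRANSFORM FOR TYPE hstore
          LANGUAGE plpythonu
          AS $$
          # with the transform, h arrives as a Python dict, not a string
          return list(h.keys())
          $$;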
      
      reviews by Pavel Stěhule and Andres Freund
      cac76582