- May 15, 2015
Simon Riggs authored
Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows extensible sampling functions to be written, using a standard API. The basic version follows the SQL standard exactly. Concrete, usable use cases for the sampling API follow in later commits. Petr Jelinek, reviewed by Michael Paquier and Simon Riggs
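A minimal sketch of the new syntax (table name hypothetical): BERNOULLI samples individual rows, SYSTEM samples whole blocks, and REPEATABLE fixes the seed.

    -- Roughly 10% of rows, chosen row by row:
    SELECT * FROM big_table TABLESAMPLE BERNOULLI (10);

    -- Roughly 10% of the table's blocks: cheaper, but clumpier;
    -- REPEATABLE makes the sample reproducible across runs.
    SELECT * FROM big_table TABLESAMPLE SYSTEM (10) REPEATABLE (42);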
Stephen Frost authored
No need to have pg_audit.conf any longer since the regression tests are just loading the module at the start of each session (to simulate being in shared_preload_libraries, which isn't something we can actually make happen on the buildfarm itself, it seems). Pointed out by Tom
Simon Riggs authored
Refactoring ahead of tablesample patch. Requested and reviewed by Michael Paquier. Petr Jelinek
- May 14, 2015
Stephen Frost authored
In pg_audit, raise client_min_messages to warning, then reset the role attributes, completely resetting the session without making the regression tests depend on being run by any particular user.
Stephen Frost authored
Instead of creating a new superuser role, determine the current user and use that role instead. Further, clean up and drop all objects created by the regression test. Pointed out by Tom.
Tom Lane authored
"%ld" is not a portable way to print int64's. This may explain the buildfarm crashes we're seeing --- it seems to make dromedary happy, at least.
Tom Lane authored
Stephen Frost authored
Also, use a function to load the extension ahead of all other calls, simulating a load from shared_preload_libraries, to make sure the hooks are in place before logging starts.
Stephen Frost authored
The database built by the buildfarm is specific to the extension, so use \connect - instead.
Stephen Frost authored
Remove the check that pg_audit be installed via shared_preload_libraries, as that's not going to work when running the regression tests in the buildfarm. That check was primarily a nice-to-have and isn't required anyway.
Stephen Frost authored
This extension provides detailed logging classes, the ability to control logging at a per-object level, and fully-qualified object names for logged statements (DML and DDL) in independent fields of the log output. Authors: Ian Barwick, Abhijit Menon-Sen, David Steele. Reviews by: Robert Haas, Tatsuo Ishii, Sawada Masahiko, Fujii Masao, Simon Riggs. Discussion with: Josh Berkus, Jaime Casanova, Peter Eisentraut, David Fetter, Yeb Havinga, Alvaro Herrera, Petr Jelinek, Tom Lane, MauMau, Bruce Momjian, Jim Nasby, Michael Paquier, Fabrízio de Royes Mello, Neil Tiffin
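A rough usage sketch; the setting name and logging classes here are assumptions based on the extension's naming, not verbatim from this commit.

    CREATE EXTENSION pg_audit;
    -- Log DDL and write activity (assumed pg_audit.log parameter):
    SET pg_audit.log = 'ddl, write';
    CREATE TABLE account (id int);  -- now emitted as an audit log entry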
- May 13, 2015
Tom Lane authored
If a postgres_fdw foreign table is a non-locked source relation in an UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the query selects its ctid column, the wrong value would be returned if an EvalPlanQual recheck occurred. This happened because the foreign table's result row was copied via the ROW_MARK_COPY code path, and EvalPlanQualFetchRowMarks just unconditionally set the reconstructed tuple's t_self to "invalid".

To fix that, we can have EvalPlanQualFetchRowMarks copy the composite datum's t_ctid field, and be sure to initialize that along with t_self when postgres_fdw constructs a tuple to return. If we just did that much then EvalPlanQualFetchRowMarks would start returning "(0,0)" as ctid for all other ROW_MARK_COPY cases, which perhaps does not matter much, but then again maybe it might. The cause of that is that heap_form_tuple, which is the ultimate source of all composite datums, simply leaves t_ctid as zeroes in newly constructed tuples. That seems like a bad idea on general principles: a field that's really not been initialized shouldn't appear to have a valid value. So let's eat the trivial additional overhead of doing "ItemPointerSetInvalid(&(td->t_ctid))" in heap_form_tuple.

This closes out our handling of Etsuro Fujita's report that tableoid and ctid weren't correctly set in postgres_fdw EvalPlanQual cases. Along the way we did a great deal of work to improve FDWs' ability to control row locking behavior; which was not wasted effort by any means, but it didn't end up being a fix for this problem because that feature would be too expensive for postgres_fdw to use all the time.

Although the fix for the tableoid misbehavior was back-patched, I'm hesitant to do so here; it seems far less likely that people would care about remote ctid than tableoid, and even such a minor behavioral change as this in heap_form_tuple is perhaps best not back-patched. So commit to HEAD only, at least for the moment.

Etsuro Fujita, with some adjustments by me
Andres Freund authored
The new function allows estimating bloat and other table-level statistics in a faster, but approximate, way. It does so by using information from the free space map for pages marked all-visible in the visibility map; the rest of the table is actually read, and its free space/bloat measured accurately. In many cases that provides bloat information much more quickly, with less I/O. Author: Abhijit Menon-Sen Reviewed-By: Andres Freund, Amit Kapila and Tomas Vondra Discussion: 20140402214144.GA28681@kea.toroid.org
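For example (requires the pgstattuple extension; table name hypothetical, column names as in the function's output):

    CREATE EXTENSION pgstattuple;
    -- Approximate bloat figures, skipping pages marked all-visible:
    SELECT table_len, dead_tuple_percent, approx_free_space
    FROM pgstattuple_approx('some_table'::regclass);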
- May 12, 2015
Peter Eisentraut authored
Author: David Rowley <dgrowleyml@gmail.com>
- May 10, 2015
Tom Lane authored
Commit e7cb7ee1 included some design decisions that seem pretty questionable to me, and there was quite a lot of stuff not to like about the documentation and comments. Clean up as follows:

* Consider foreign joins only between foreign tables on the same server, rather than between any two foreign tables with the same underlying FDW handler function. In most if not all cases, the FDW would simply have had to apply the same-server restriction itself (far more expensively, both for lack of caching and because it would be repeated for each combination of input sub-joins), or else risk nasty bugs. Anyone who's really intent on doing something outside this restriction can always use the set_join_pathlist_hook.

* Rename fdw_ps_tlist/custom_ps_tlist to fdw_scan_tlist/custom_scan_tlist to better reflect what they're for, and allow these custom scan tlists to be used even for base relations.

* Change make_foreignscan() API to include passing the fdw_scan_tlist value, since the FDW is required to set that. Backwards compatibility doesn't seem like an adequate reason to expect FDWs to set it in some ad-hoc extra step, and anyway existing FDWs can just pass NIL.

* Change the API of path-generating subroutines of add_paths_to_joinrel, and in particular that of GetForeignJoinPaths and set_join_pathlist_hook, so that various less-used parameters are passed in a struct rather than as separate parameter-list entries. The objective here is to reduce the probability that future additions to those parameter lists will result in source-level API breaks for users of these hooks. It's possible that this is even a small win for the core code, since most CPU architectures can't pass more than half a dozen parameters efficiently anyway. I kept root, joinrel, outerrel, innerrel, and jointype as separate parameters to reduce code churn in joinpath.c --- in particular, putting jointype into the struct would have been problematic because of the subroutines' habit of changing their local copies of that variable.

* Avoid ad-hocery in ExecAssignScanProjectionInfo. It was probably all right for it to know about IndexOnlyScan, but if the list is to grow we should refactor the knowledge out to the callers.

* Restore nodeForeignscan.c's previous use of the relcache to avoid extra GetFdwRoutine lookups for base-relation scans.

* Lots of cleanup of documentation and missed comments. Re-order some code additions into more logical places.
- May 09, 2015
Andrew Dunstan authored
The new type has the scope of the whole database cluster, so with respect to object dependencies it doesn't behave the same as the existing OID alias types, which have database scope. To avoid confusion, constants of the new type are prohibited from appearing where dependencies would be created involving them. Also, add a note to the docs about possible MVCC violation and optimization issues, which apply generally to all reg* types. Kyotaro Horiguchi
- May 08, 2015
Andres Freund authored
Buildfarm member magpie sorted the output differently than Peter intended. "Resolve" the problem by simply not aggregating; it's not that many lines.
Andres Freund authored
The newly added ON CONFLICT clause allows specifying an alternative to raising a unique or exclusion constraint violation error when inserting. ON CONFLICT refers to constraints that can either be specified using an inference clause (by specifying the columns of a unique constraint) or by naming a unique or exclusion constraint. DO NOTHING avoids the constraint violation without touching the pre-existing row. DO UPDATE SET ... [WHERE ...] updates the pre-existing tuple, and has access to both the tuple proposed for insertion and the existing tuple; the optional WHERE clause can be used to prevent an update from being executed. The UPDATE SET and WHERE clauses have access to the tuple proposed for insertion using the "magic" EXCLUDED alias, and to the pre-existing tuple using the table name or its alias. This feature is often referred to as upsert.

This is implemented using a new infrastructure called "speculative insertion". It is an optimistic variant of regular insertion that first does a pre-check for existing tuples and then attempts an insert. If a violating tuple was inserted concurrently, the speculatively inserted tuple is deleted and a new attempt is made. If the pre-check finds a matching tuple the alternative DO NOTHING or DO UPDATE action is taken. If the insertion succeeds without detecting a conflict, the tuple is deemed inserted.

To handle the possible ambiguity between the EXCLUDED alias and a table named excluded, and for convenience with long relation names, INSERT INTO can now alias its target table. Bumps catversion as stored rules change.

Author: Peter Geoghegan, with significant contributions from Heikki Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes. Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs, Dean Rasheed, Stephen Frost and many others.
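A sketch of the new syntax against a hypothetical counters table, using both the target-table alias and the EXCLUDED pseudo-relation described above:

    INSERT INTO counters AS c (key, hits)
    VALUES ('home', 1)
    ON CONFLICT (key)                      -- infer the unique index on "key"
    DO UPDATE SET hits = c.hits + EXCLUDED.hits
    WHERE c.hits < 1000000;                -- optional guard against updating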
Andres Freund authored
Previously, relation range table entries used a single Bitmapset field representing which columns required either UPDATE or INSERT privileges, despite the fact that INSERT and UPDATE privileges are separately cataloged, and may be independently held. As statements so far required either insert or update privileges but never both, that was sufficient. The required permission could be inferred from the top level statement run. The upcoming INSERT ... ON CONFLICT UPDATE feature needs to independently check for both privileges in one statement though, so that is not sufficient anymore. Bumps catversion as stored rules change. Author: Peter Geoghegan Reviewed-By: Andres Freund
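For example, an upsert executed by a less-privileged role must now pass both checks within a single statement (role and table names hypothetical):

    GRANT INSERT (key, hits) ON counters TO app_user;
    GRANT UPDATE (hits) ON counters TO app_user;
    -- app_user's INSERT ... ON CONFLICT DO UPDATE is checked against
    -- the INSERT and UPDATE column privileges independently.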
- May 07, 2015
Alvaro Herrera authored
The minmax opclass was using the wrong support functions when cross-datatypes queries were run. Instead of trying to fix the pg_amproc definitions (which apparently is not possible), use the already correct pg_amop entries instead. This requires jumping through more hoops (read: extra syscache lookups) to obtain the underlying functions to execute, but it is necessary for correctness. Author: Emre Hasegeli, tweaked by Álvaro Review: Andreas Karlsson

Also change BrinOpcInfo to record each stored type's typecache entry instead of just the OID. Turns out that the full type cache is necessary in brin_deform_tuple: the original code used the indexed type's byval and typlen properties to extract the stored tuple, which is correct in Minmax; but in other implementations that want to store something different, that's wrong. The realization that this is a bug comes from Emre also, but I did not use his patch. I also adopted Emre's regression test code (with smallish changes), which is more complete.
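A sketch of the kind of cross-datatype query affected (schema hypothetical):

    CREATE TABLE events (id integer, payload text);
    CREATE INDEX events_brin ON events USING brin (id);
    -- int4 column compared against an int8 constant: the cross-type
    -- operator is looked up through the pg_amop entries.
    SELECT count(*) FROM events WHERE id < 1000000::int8;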
- May 05, 2015
Tom Lane authored
These functions should return SETOF TEXT[], like the core functions they are wrappers for; but they were incorrectly declared as returning just TEXT[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you should get is an empty set (zero rows). Second, the 'g' flag was effectively ignored, since you would get only one result array even if there were multiple matches, as reported by Jeff Certain.

While ignoring 'g' is a clear bug, the behavior for no matches might well have been thought to be the intended behavior by people who hadn't compared it carefully to the core regexp_matches() functions. So we should tread carefully about introducing this change in the back branches. Still, it clearly is a bug and so providing some fix is desirable.

After discussion, the conclusion was to introduce the change in a 1.1 version of the citext extension (as we would need to do anyway); 1.0 still contains the incorrect behavior. 1.1 is the default and only available version in HEAD, but it is optional in the back branches, where 1.0 remains the default version. People wishing to adopt the fix in back branches will need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'. (I also provided a downgrade script in the back branches, so people could go back to 1.0 if necessary.)

This should be called out as an incompatible change in the 9.5 release notes, although we'll also document it in the next set of back-branch release notes. The notes should mention that any views or rules that use citext's regexp_matches() functions will need to be dropped before upgrading to 1.1, and then recreated again afterwards.

Back-patch to 9.1. The bug goes all the way back to citext's introduction in 8.4, but pre-9.1 there is no extension mechanism with which to manage the change. Given the lack of previous complaints it seems unnecessary to change this behavior in 9.0, anyway.
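A sketch of adopting the fix in a back branch and of the corrected set-returning behavior:

    ALTER EXTENSION citext UPDATE TO '1.1';
    -- With 1.1, a 'g'-flagged call returns one row per match (a set),
    -- and zero rows when nothing matches, like core regexp_matches():
    SELECT regexp_matches('barbeque'::citext, 'b'::citext, 'g');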
Peter Eisentraut authored
Python 2.3 does not have the sorted() function, so do it the long way.
- May 03, 2015
Andrew Dunstan authored
These modules were all missing essential Windows scaffolding, including resources files and descriptions, and links to the relevant library import files. This latter item means that the modules can't be built with pgxs on Windows, as we don't install the import files. If we ever decide to install them this restriction could probably be removed. Also, as with plperl we need to make sure that perl's CORE directory is last on the include list, as on Windows it appears to contain some headers with names that clash with names of some headers we include.
- May 02, 2015
Peter Eisentraut authored
PORTNAME isn't set until the global makefiles have been included.
Peter Eisentraut authored
Combine the two places that set CPPFLAGS into one. Also, some settings should be restricted to Windows only. More precisely, -Wno-comment is a GCC-only option, but Windows in a makefile implies GCC at the moment. Also, since -Wno-comment is more properly a preprocessor option, move it to CPPFLAGS to simplify things a bit.
- May 01, 2015
Andrew Dunstan authored
This involves moving perl's CORE library to the end of the include list, and adding other compilation settings that plperl uses. This won't completely fix the breakage currently being seen by gcc builds on Windows, but it will let the build get further, and should be wholly benign, if not beneficial, on *nix.
- Apr 30, 2015
Robert Haas authored
This does four basic things. First, it provides convenience routines to coordinate the startup and shutdown of parallel workers. Second, it synchronizes various pieces of state (e.g. GUCs, combo CID mappings, transaction snapshot) from the parallel group leader to the worker processes. Third, it prohibits various operations that would result in unsafe changes to that state while parallelism is active. Finally, it propagates events that would result in an ErrorResponse, NoticeResponse, or NotifyResponse message being sent to the client from the parallel workers back to the master, from which they can then be sent on to the client. Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke. Suggestions and review from Andres Freund, Heikki Linnakangas, Noah Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
Peter Eisentraut authored
The "check" target no longer needs to depend on "all", because it now runs "install" directly, which in turn depends on "all". Doing both will cause problems with parallel make, because two builds will run next to each other. Also remove the redirection of the temp-install output into a log file. This was appropriate when this was done from within pg_regress, but now it's just a regular make run, and especially with the above changes this will now take the place of running the "all" target before the test suites. problem report by Jeff Janes, patch in part by Michael Paquier
- Apr 29, 2015
Andres Freund authored
When implementing a replication solution on top of logical decoding, two related problems exist:

* How to safely keep track of replication progress
* How to change replication behavior based on the origin of a row, e.g. to avoid loops in bi-directional replication setups

The solution to these problems, as implemented here, consists of three parts:

1) 'replication origins', which identify nodes in a replication setup.
2) 'replication progress tracking', which remembers, for each replication origin, how far replay has progressed in an efficient and crash-safe manner.
3) The ability to filter out changes performed at the behest of a replication origin during logical decoding; this allows complex replication topologies, e.g. by filtering all replayed changes out.

Most of this could also be implemented in "userspace", e.g. by inserting additional rows containing origin information, but that ends up being much less efficient and more complicated. We don't want to require various replication solutions to reimplement this logic independently. The infrastructure is intended to be generic enough to be reusable.

This infrastructure also replaces the 'nodeid' infrastructure of commit timestamps. It is intended to provide all the former capabilities, except that there are only 2^16 different origins; but now they integrate with logical decoding. Additionally, more functionality is accessible via SQL. Since the commit timestamp infrastructure was also introduced in 9.5 (commit 73c986ad), changing the API is not a problem.

For now the number of origins for which replication progress can be tracked simultaneously is determined by the max_replication_slots GUC. That GUC is not a perfect match for configuring this, but there doesn't seem to be sufficient reason to introduce a separate new one.

Bumps both catversion and WAL page magic.

Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer Discussion: 20150216002155.GI15326@awork2.anarazel.de, 20140923182422.GA15776@alap3.anarazel.de, 20131114172632.GE7522@alap2.anarazel.de
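The SQL-visible surface looks roughly like this (origin name is illustrative):

    -- Register an origin for the remote node and attach it to this
    -- session, so locally replayed changes are tagged with that origin:
    SELECT pg_replication_origin_create('node_a');
    SELECT pg_replication_origin_session_setup('node_a');
    -- ... replay remote changes ...
    -- Crash-safe progress per origin is visible here:
    SELECT * FROM pg_replication_origin_status;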
- Apr 26, 2015
Peter Eisentraut authored
On some platforms, plperl and plperlu cannot be loaded at the same time. So split the test into two separate test files.
Peter Eisentraut authored
This provides a mechanism for specifying conversions between SQL data types and procedural languages. As examples, there are transforms for hstore and ltree for PL/Perl and PL/Python. Reviews by Pavel Stěhule and Andres Freund
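For instance, the hstore/PL/Perl pairing ships as an extension wrapping the underlying CREATE TRANSFORM DDL:

    -- Install the example transform so PL/Perl functions receive hstore
    -- values as Perl hashes rather than as strings:
    CREATE EXTENSION hstore_plperl;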
- Apr 23, 2015
Peter Eisentraut authored
Before, make check-world would create a new temporary installation for each test suite, which is slow and wasteful. Instead, we now create one test installation that is used by all test suites that are part of a make run. The management of the temporary installation is removed from pg_regress and handled in the makefiles. This allows for better control, and unifies the code with that of test suites not run through pg_regress. review and msvc support by Michael Paquier <michael.paquier@gmail.com> more review by Fabien Coelho <coelho@cri.ensmp.fr>
- Apr 22, 2015
Stephen Frost authored
As noted by Etsuro Fujita [1] and Dean Rasheed [2], cb1ca4d8 changed ExecBuildAuxRowMark() to always look for the tableoid in the target list, but didn't also change preprocess_targetlist() to always include the tableoid. This resulted in errors with soon-to-be-added RLS with inheritance tests, and errors when using inheritance with foreign tables. Authors: Etsuro Fujita and Dean Rasheed (independently). Minor word-smithing on the comments by me. [1] 552CF0B6.8010006@lab.ntt.co.jp [2] CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com
Andres Freund authored
In d811c037 active_in was added but discussion since showed that active_pid is preferred as a name. Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
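With the final name, associating a slot with its session is a simple join, as sketched here:

    SELECT s.slot_name, s.active_pid, a.application_name, a.client_addr
    FROM pg_replication_slots s
    JOIN pg_stat_activity a ON a.pid = s.active_pid;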
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
- Apr 21, 2015
Andres Freund authored
Right now it is visible whether a replication slot is active in any session, but not in which one. Adding the active_in column, containing the pid of the backend that has acquired the slot, makes it much easier to associate pg_replication_slots entries with the corresponding pg_stat_replication/pg_stat_activity row. This should have been done from the start, but I (Andres) dropped the ball there somehow. Author: Craig Ringer, revised by me Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
- Apr 20, 2015
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>