- Jul 16, 2015
Noah Misch authored
The MSVC build system already did this, and building against Python 3 requires it. Back-patch to 9.5, where the module was introduced.
-
- Jul 14, 2015
Fujii Masao authored
This patch also removes an obsolete comment. Back-patch to 9.5, where BRIN indexes were added.
-
- Jul 09, 2015
Noah Misch authored
The AIX 7.1 libm is static, and AIX postgres executables do not export symbols acquired from libraries. Back-patch to 9.5, where commit cfe12763 added a sqrt() call.
-
- Jul 02, 2015
Tom Lane authored
This allows convenient checking for existence of a GUC from SQL, which is particularly useful when dealing with custom variables. David Christensen, reviewed by Jeevan Chalke
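As a minimal sketch of what this enables (the custom variable name is hypothetical):

    -- With missing_ok = true, an unset parameter yields NULL instead
    -- of an error:
    SELECT current_setting('myapp.user_id', true);

    -- The one-argument form still raises:
    --   ERROR:  unrecognized configuration parameter "myapp.user_id"
    SELECT current_setting('myapp.user_id');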
-
Heikki Linnakangas authored
Patch by David Rowley. Backpatch to 9.5, as some of the calls were new in 9.5, and keeping the code in sync with master makes future backpatching easier.
-
Fujii Masao authored
Commit 179cdd09 added macros to check if a filename is a WAL segment or other such file. However, there were still some instances of the strlen + strspn combination used for that check in WAL-related utilities like pg_archivecleanup. Those checks can be replaced with the macros. This patch makes use of the macros in those utilities, which makes the code a bit easier to read. Back-patch to 9.5. Michael Paquier
-
- Jun 27, 2015
Andres Freund authored
test_decoding used fastgetattr() to extract column values. That's wrong when decoding updates and deletes if a table's replica identity is set to FULL and new columns have been added since the old version of the tuple was created. Due to the lack of a crosscheck against the datum's natts value, an invalid value would be output, leading to errors or worse. Bug: #13470 Reported-By: Krzysztof Kotlarski Discussion: 20150626100333.3874.90852@wrigleys.postgresql.org Backpatch to 9.4, where the feature, including the bug, was added.
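A minimal sketch of the failing scenario, assuming a server running with wal_level = logical (names are illustrative):

    SELECT pg_create_logical_replication_slot('regression_slot',
                                              'test_decoding');

    CREATE TABLE tbl (id int PRIMARY KEY);
    ALTER TABLE tbl REPLICA IDENTITY FULL;
    INSERT INTO tbl VALUES (1);           -- old tuple has one attribute
    ALTER TABLE tbl ADD COLUMN val text;  -- on-disk tuple keeps natts = 1

    UPDATE tbl SET id = 2;                -- decoding must read the old tuple

    -- Before the fix, the old-row values in this output could be garbage:
    SELECT data FROM pg_logical_slot_get_changes('regression_slot',
                                                 NULL, NULL);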
-
- May 31, 2015
Peter Eisentraut authored
Newer Python versions randomize the hash seed for dictionaries, resulting in a random output order, which messes up the regression test diffs. Instead, use Python assert to compare the dictionaries with their expected value.
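A sketch of the order-independent style, assuming a PL/Python (plpythonu) test function:

    CREATE FUNCTION check_dicts() RETURNS void LANGUAGE plpythonu AS $$
    d = {"aa": "bb", "cc": None}
    # Dict equality ignores iteration order, so a randomized hash seed
    # cannot change the outcome the way printing the dict would.
    assert d == {"cc": None, "aa": "bb"}
    $$;
    SELECT check_dicts();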
-
- May 28, 2015
Stephen Frost authored
-
Stephen Frost authored
This removes pg_audit, per discussion: 20150528082038.GU26667@tamriel.snowman.net
-
- May 24, 2015
Tom Lane authored
Fix some places where pgindent did silly stuff, often because project style wasn't followed to begin with. (I've not touched the atomics headers, though.)
-
Tom Lane authored
Remove a bunch of "extern Datum foo(PG_FUNCTION_ARGS);" declarations that are no longer needed now that PG_FUNCTION_INFO_V1(foo) provides that. Some of these were evidently missed in commit e7128e8d, but others were cargo-culted into code added since then. Possibly that can be blamed in part on the fact that we'd not fixed relevant documentation examples, which I've now done.
-
Bruce Momjian authored
-
- May 20, 2015
Heikki Linnakangas authored
Use "a" and "an" correctly, mostly in comments. Two error messages were also fixed (they were just elogs, so no translation work required). Two function comments in pg_proc.h were also fixed. Etsuro Fujita reported one of these, but I found a lot more with grep. Also fix a few other typos spotted while grepping for the a/an typos. For example, "consists out of ..." -> "consists of ...". Plus a "though"/ "through" mixup reported by Euler Taveira. Many of these typos were in old code, which would be nice to backpatch to make future backpatching easier. But much of the code was new, and I didn't feel like crafting separate patches for each branch. So no backpatching.
-
- May 19, 2015
Andres Freund authored
Defer lookup of the opfamily and input type of a user-specified opclass until the optimizer selects among available unique indexes, and store the opclass in the parse-analyzed tree instead. The primary reason for doing this is that for rule deparsing it's easier to use the opclass than the previous representation. While at it, also rename a variable in the inference code to better fit its purpose. This is separate from the actual fixes for deparsing to make review easier.
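This concerns ON CONFLICT unique-index inference; a minimal sketch of an inference specification naming an opclass (table and index names are made up):

    CREATE TABLE words (w text);
    CREATE UNIQUE INDEX words_pattern_key ON words (w text_pattern_ops);

    -- The opclass written here is now stored in the parse-analyzed tree
    -- and matched against available unique indexes by the optimizer:
    INSERT INTO words VALUES ('foo')
    ON CONFLICT (w text_pattern_ops) DO NOTHING;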
-
Peter Eisentraut authored
The plain C string language name needs to be wrapped in makeString() so that the parse tree is copyable. This is detectable by -DCOPY_PARSE_PLAN_TREES. Add a test case for the COMMENT case. Also make the quoting in the error messages more consistent. discovered by Tom Lane
-
- May 18, 2015
Noah Misch authored
This has been the predominant outcome. When the output of decrypting with a wrong key coincidentally resembled an OpenPGP packet header, pgcrypto could instead report "Corrupt data", "Not text data" or "Unsupported compression algorithm". The distinct "Corrupt data" message added no value. The latter two error messages misled when the decrypted payload also exhibited fundamental integrity problems. Worse, error message variance in other systems has enabled cryptologic attacks; see RFC 4880 section "14. Security Considerations". Whether these pgcrypto behaviors are likewise exploitable is unknown. In passing, document that pgcrypto does not resist side-channel attacks. Back-patch to 9.0 (all supported versions). Security: CVE-2015-3167
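For illustration, a decryption attempt with a wrong key (assumes the pgcrypto extension; the unified message is believed to be "Wrong key or corrupt data", which is not stated in the text above):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    SELECT pgp_sym_decrypt(
             pgp_sym_encrypt('secret message', 'right-key'),
             'wrong-key');
    -- ERROR:  Wrong key or corrupt data
    -- (no longer "Corrupt data" or "Not text data", which leaked hints
    -- about the decrypted payload)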
-
Tom Lane authored
The previous coding in hstore_plpython and ltree_plpython wiped out any values set by the base makefiles. This at least had the effect of running the tests in "regression" not "contrib_regression" as expected. These being pretty new modules, there might be other bad effects we'd not noticed yet.
-
- May 17, 2015
Stephen Frost authored
Clean up the Makefile, per Michael Paquier. Classify REINDEX as we do in core, use '1.0' for the version, per Fujii.
-
Magnus Hagander authored
Dmitriy Olshevskiy
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- May 16, 2015
Andres Freund authored
This SQL-standard functionality allows aggregating data by several different GROUP BY clauses at once. Each grouping set returns rows with the columns it does not group by set to NULL. This could previously be achieved by doing each grouping as a separate query, conjoined by UNION ALLs. Besides being considerably more concise, grouping sets will in many cases be faster, requiring only one scan over the underlying data.

The current implementation of grouping sets only supports using sorting for input. Individual sets that share a sort order are computed in one pass. If there are sets that don't share a sort order, additional sort & aggregation steps are performed. These additional passes are sourced by the previous sort step, thus avoiding repeated scans of the source data. The code is structured in a way that adding support for purely using hash aggregation, or a mix of hashing and sorting, is possible. Sorting was chosen to be supported first, as it is the most generic method of implementation.

Instead of, as in earlier versions of the patch, representing the chain of sort and aggregation steps as full-blown planner and executor nodes, all but the first sort are performed inside the aggregation node itself. This avoids the need to do some unusual gymnastics to handle having to return aggregated and non-aggregated tuples from underlying nodes, as well as having to shut down underlying nodes early to limit memory usage. The optimizer still builds Sort/Agg nodes to describe each phase, but they're not part of the plan tree; instead they are additional data for the aggregation node. They're a convenient and preexisting way to describe aggregation and sorting. The first (and possibly only) sort step is still performed as a separate execution step. That retains similarity with existing GROUP BY plans, makes rescans fairly simple, avoids very deep plans (leading to slow EXPLAINs), and easily allows skipping the sort step if the underlying data is already sorted by other means.

A somewhat ugly side of this patch is having to deal with a grammar ambiguity between the new CUBE keyword and the cube extension/functions named cube (and rollup). To avoid breaking existing deployments of the cube extension it has not been renamed, nor has cube been made a reserved keyword. Instead precedence hacking is used to make GROUP BY cube(..) refer to the CUBE grouping sets feature, and not the function cube(). To actually group by a function cube(), unlikely as that might be, the function name has to be quoted.

Needs a catversion bump because stored rules may change.

Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
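A minimal sketch (table and columns are illustrative):

    -- One scan computes all three groupings; previously this took three
    -- queries glued together with UNION ALL:
    SELECT brand, size, sum(sales)
    FROM items_sold
    GROUP BY GROUPING SETS ((brand), (size), ());

    -- CUBE here is the grouping-sets construct; to call a function named
    -- cube() in GROUP BY instead, quote it: GROUP BY "cube"(...)
    SELECT brand, size, sum(sales)
    FROM items_sold
    GROUP BY CUBE (brand, size);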
-
- May 15, 2015
Alvaro Herrera authored
For upcoming BRIN opclasses, it's convenient to have strategy numbers defined in a single place. Since there's nothing appropriate, create it. The StrategyNumber typedef now lives there, as well as existing strategy numbers for B-trees (from skey.h) and R-tree-and-friends (from gist.h). skey.h is forced to include stratnum.h because of the StrategyNumber typedef, but gist.h is not; extensions that currently rely on gist.h for rtree strategy numbers might need to add a new include of stratnum.h. A few .c files can stop including skey.h and/or gist.h, which is a nice side benefit. Per discussion: https://www.postgresql.org/message-id/20150514232132.GZ2523@alvh.no-ip.org Authored by Emre Hasegeli and Álvaro. (It's not clear to me why bootscanner.l has any #include lines at all.)
-
Simon Riggs authored
-
Simon Riggs authored
-
Simon Riggs authored
-
Simon Riggs authored
Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows extensible sampling functions to be written, using a standard API. The basic version follows the SQL standard exactly. Usable concrete use cases for the sampling API follow in later commits. Petr Jelinek Reviewed by Michael Paquier and Simon Riggs
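For illustration, both methods on a hypothetical table:

    -- BERNOULLI samples individual rows; SYSTEM samples whole blocks,
    -- which is cheaper but clusters the sample by physical page:
    SELECT * FROM big_table TABLESAMPLE BERNOULLI (10);
    SELECT * FROM big_table TABLESAMPLE SYSTEM (10) REPEATABLE (42);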
-
Stephen Frost authored
No need to have pg_audit.conf any longer since the regression tests are just loading the module at the start of each session (to simulate being in shared_preload_libraries, which isn't something we can actually make happen on the buildfarm itself, it seems). Pointed out by Tom
-
Simon Riggs authored
Refactoring ahead of the tablesample patch. Requested and reviewed by Michael Paquier. Petr Jelinek
-
- May 14, 2015
Stephen Frost authored
In pg_audit, set client_min_messages up to warning, then reset the role attributes, to completely reset the session while not making the regression tests depend on being run by any particular user.
-
Stephen Frost authored
Instead of creating a new superuser role, extract out what the current user is and use that user instead. Further, clean up and drop all objects created by the regression test. Pointed out by Tom.
-
Tom Lane authored
"%ld" is not a portable way to print int64's. This may explain the buildfarm crashes we're seeing --- it seems to make dromedary happy, at least.
-
Tom Lane authored
-
Stephen Frost authored
Also, use a function to load the extension ahead of all other calls, simulating load from shared_preload_libraries, to make sure the hooks are in place before logging starts.
-
Stephen Frost authored
The database built by the buildfarm is specific to the extension; use \connect - instead.
-
Stephen Frost authored
Remove the check that pg_audit be installed by shared_preload_libraries, as that's not going to work when running the regression tests in the buildfarm. That check was primarily a nice-to-have and isn't required anyway.
-
Stephen Frost authored
This extension provides detailed logging classes, ability to control logging at a per-object level, and includes fully-qualified object names for logged statements (DML and DDL) in independent fields of the log output. Authors: Ian Barwick, Abhijit Menon-Sen, David Steele Reviews by: Robert Haas, Tatsuo Ishii, Sawada Masahiko, Fujii Masao, Simon Riggs Discussion with: Josh Berkus, Jaime Casanova, Peter Eisentraut, David Fetter, Yeb Havinga, Alvaro Herrera, Petr Jelinek, Tom Lane, MauMau, Bruce Momjian, Jim Nasby, Michael Paquier, Fabrízio de Royes Mello, Neil Tiffin
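A hypothetical session-level setup in the spirit of the regression-test entries above; the GUC name pg_audit.log and its class values are assumptions, not taken from the text:

    -- Loading in-session simulates shared_preload_libraries for testing:
    LOAD 'pg_audit';
    SET pg_audit.log = 'ddl, write';   -- assumed GUC name and classes

    CREATE TABLE t (x int);            -- would be logged under the DDL class
    INSERT INTO t VALUES (1);          -- would be logged under the WRITE class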
-
- May 13, 2015
Tom Lane authored
If a postgres_fdw foreign table is a non-locked source relation in an UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the query selects its ctid column, the wrong value would be returned if an EvalPlanQual recheck occurred. This happened because the foreign table's result row was copied via the ROW_MARK_COPY code path, and EvalPlanQualFetchRowMarks just unconditionally set the reconstructed tuple's t_self to "invalid". To fix that, we can have EvalPlanQualFetchRowMarks copy the composite datum's t_ctid field, and be sure to initialize that along with t_self when postgres_fdw constructs a tuple to return.

If we just did that much then EvalPlanQualFetchRowMarks would start returning "(0,0)" as ctid for all other ROW_MARK_COPY cases, which perhaps does not matter much, but then again maybe it might. The cause of that is that heap_form_tuple, which is the ultimate source of all composite datums, simply leaves t_ctid as zeroes in newly constructed tuples. That seems like a bad idea on general principles: a field that's really not been initialized shouldn't appear to have a valid value. So let's eat the trivial additional overhead of doing "ItemPointerSetInvalid(&(td->t_ctid))" in heap_form_tuple.

This closes out our handling of Etsuro Fujita's report that tableoid and ctid weren't correctly set in postgres_fdw EvalPlanQual cases. Along the way we did a great deal of work to improve FDWs' ability to control row locking behavior; which was not wasted effort by any means, but it didn't end up being a fix for this problem because that feature would be too expensive for postgres_fdw to use all the time.

Although the fix for the tableoid misbehavior was back-patched, I'm hesitant to do so here; it seems far less likely that people would care about remote ctid than tableoid, and even such a minor behavioral change as this in heap_form_tuple is perhaps best not back-patched. So commit to HEAD only, at least for the moment.

Etsuro Fujita, with some adjustments by me
-
Andres Freund authored
The new function allows estimating bloat and other table-level statistics in a faster, but approximate, way. It does so by using information from the free space map for pages marked as all-visible in the visibility map; the rest of the table is actually read, and free space/bloat is measured accurately. In many cases that yields bloat information much more quickly, causing less I/O. Author: Abhijit Menon-Sen Reviewed-By: Andres Freund, Amit Kapila and Tomas Vondra Discussion: 20140402214144.GA28681@kea.toroid.org
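The function in question is pgstattuple_approx() in the pgstattuple extension; a minimal usage sketch (the selected output column names are from memory and may differ slightly):

    CREATE EXTENSION pgstattuple;

    -- Approximate counterpart of pgstattuple(): all-visible pages are
    -- estimated from the free space map, the rest are read normally.
    SELECT table_len, approx_tuple_count, approx_free_space
    FROM pgstattuple_approx('some_table'::regclass);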
-