- Jan 15, 2015
-
-
Tom Lane authored
The folly of the previous arrangement was just demonstrated: there's no convenient way to add fields to ExplainState without breaking ABI, even if callers have no need to touch those fields. Since we might well need to do that again someday in back branches, let's change things so that only explain.c has to have sizeof(ExplainState) compiled into it. This costs one extra palloc() per EXPLAIN operation, which is surely pretty negligible.
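In code, the change amounts to switching from caller-side allocation to an opaque constructor. A minimal sketch of the pattern, assuming the constructor is named NewExplainState() (illustrative, not the exact patch):

    /* Before: callers allocated the struct themselves, baking
     * sizeof(ExplainState) into every caller's compiled code. */
    ExplainState es_old;
    ExplainInitState(&es_old);

    /* After: only explain.c knows the struct size; callers hold a
     * pointer, so fields can be appended without breaking ABI. */
    ExplainState *es = NewExplainState();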
-
Robert Haas authored
Previously, read() might have returned a length equal to the buffer length, and then the subsequent store to buf[len] would write a zero-byte one byte past the end. This doesn't seem likely to be a security issue, but there's some chance it could result in pg_standby misbehaving. Spotted by Coverity; patch by Michael Paquier, reviewed by me.
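The shape of the fix, as a hedged sketch (buffer size and names are illustrative, not pg_standby's actual code):

    #include <unistd.h>

    static void
    read_trigger_file(int fd)
    {
        char    buf[8192];
        ssize_t len;

        /* Reserve one byte for the terminator: reading a full
         * sizeof(buf) bytes would make the store to buf[len] below
         * land one byte past the end of the array. */
        len = read(fd, buf, sizeof(buf) - 1);
        if (len < 0)
            return;             /* handle the error as appropriate */
        buf[len] = '\0';
    }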
-
- Jan 14, 2015
-
-
Robert Haas authored
Spotted by Coverity. This isn't likely to matter in practice, but there's no harm in fixing it. Michael Paquier
-
- Jan 13, 2015
-
-
Heikki Linnakangas authored
Similar warnings from the backend were silenced earlier by commit c8315930, but there were a few more in contrib/hstore. Michael Paquier
-
- Jan 09, 2015
-
-
Bruce Momjian authored
Report by Tatsuo Ishii, Coverity
-
- Jan 06, 2015
-
-
Bruce Momjian authored
Backpatch certain files through 9.0
-
- Jan 04, 2015
-
-
Andres Freund authored
Previously the computation of the total test duration, measured in microseconds, overflowed due to accidentally using signed 32-bit arithmetic. As the only consequence is that pg_test_timing invocations with such overly large durations never finished, the practical consequences of this bug are minor. Pointed out by Coverity. Backpatch to 9.2, where pg_test_timing was added.
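The failure mode in miniature, as a self-contained sketch (variable names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t duration_secs = 3000;

        /* Signed 32-bit arithmetic wraps past ~2147 seconds' worth of
         * microseconds, so a loop comparing elapsed time against this
         * total could never terminate. */
        int32_t total_bad = duration_secs * 1000000;

        /* Promote to 64 bits before multiplying to get the intended
         * value. */
        int64_t total_good = (int64_t) duration_secs * 1000000;

        printf("%d vs %lld\n", total_bad, (long long) total_good);
        return 0;
    }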
-
Andres Freund authored
In the unlikely case of stdin (fd 0) being closed, the off-by-one would lead to pg_xlogdump failing to open files. Spotted by Coverity. Backpatch to 9.3 where pg_xlogdump was introduced.
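A sketch of the distinction (illustrative code, not pg_xlogdump's actual open path):

    #include <fcntl.h>

    static int
    open_wal_file(const char *path)
    {
        int     fd = open(path, O_RDONLY);

        /* 0 is a valid descriptor - it is handed out when stdin is
         * closed - and only -1 signals failure.  An off-by-one test
         * such as fd <= 0 rejects a legitimate open. */
        if (fd < 0)
            return -1;
        return fd;
    }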
-
Andres Freund authored
Pointed out by Coverity. Backpatch to all supported branches; the code has been that way for a long while.
-
- Dec 30, 2014
-
-
Tatsuo Ishii authored
-
- Dec 24, 2014
-
-
Bruce Momjian authored
-
- Dec 22, 2014
-
-
Heikki Linnakangas authored
This makes the functions much nicer to read and edit, and also makes debugging easier.
-
- Dec 18, 2014
-
-
Tom Lane authored
Previously, if you wanted anything besides C-string hash keys, you had to specify a custom hashing function to hash_create(). Nearly all such callers were specifying tag_hash or oid_hash, which is tedious, and rather error-prone, since a caller could easily miss the opportunity to optimize by using hash_uint32 when appropriate. Replace this with a design whereby callers using simple binary-data keys just specify HASH_BLOBS and don't need to mess with specific support functions. hash_create() itself will take care of optimizing when the key size is four bytes.

This nets out saving a few hundred bytes of code space, and offers a measurable performance improvement in tidbitmap.c (which was not exploiting the opportunity to use hash_uint32 for its 4-byte keys). There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function for 8-byte keys. Under this design that could be done in a centralized and machine-independent fashion, whereas getting it right for keys of platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source code compatibility for loadable modules. Eventually we might want to remove tag_hash and friends from the exported API altogether, since there's no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane
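For a caller with fixed-size binary keys, the change looks roughly like this (sketch based on the description above; the entry type is illustrative):

    HASHCTL  ctl;
    HTAB    *htab;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(MyHashEntry);    /* illustrative entry type */

    /* Before: ctl.hash = oid_hash, with HASH_ELEM | HASH_FUNCTION.
     * After: HASH_BLOBS says the key is raw binary data; hash_create()
     * picks hash_uint32 itself for 4-byte keys such as Oid. */
    htab = hash_create("My lookup table", 256, &ctl,
                       HASH_ELEM | HASH_BLOBS);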
-
Noah Misch authored
Use SSPI authentication to allow connections exclusively from the OS user that launched the test suite. This closes on Windows the vulnerability that commit be76a6d3 closed on other platforms. Users of "make installcheck" or custom test harnesses can run "pg_regress --config-auth=DATADIR" to activate the same authentication configuration that "make check" would use. Back-patch to 9.0 (all supported versions). Security: CVE-2014-0067
-
- Dec 17, 2014
-
-
Tom Lane authored
As with NOT NULL constraints, we consider that such constraints are merely reports of constraints that are being enforced by the remote server (or other underlying storage mechanism). Their only real use is to allow planner optimizations, for example in constraint-exclusion checks. Thus, the code changes here amount to little more than removal of the error that was formerly thrown for applying CHECK to a foreign table. (In passing, do a bit of cleanup of the ALTER FOREIGN TABLE reference page, which had accumulated some weird decisions about ordering etc.) Shigeru Hanada and Etsuro Fujita, reviewed by Kyotaro Horiguchi and Ashutosh Bapat.
-
Magnus Hagander authored
Add Windows versions of generated scripts, and make sure we only ignore the scripts in the root directory. Michael Paquier
-
- Dec 16, 2014
-
-
Tom Lane authored
Code added in 9.4 would attempt to divide by zero in such cases. Noted while testing fix for missing-pclose problem.
-
Tom Lane authored
If the called command fails to return data, runShellCommand forgot to pclose() the pipe before returning. This is fairly harmless in the current code, because pgbench would then abandon further processing of that client thread; so no more than nclients descriptors could be leaked this way. But it's not hard to imagine future improvements whereby that wouldn't be true. In any case, it's sloppy coding, so patch all branches. Found by Coverity.
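The leak pattern and its fix, as a hedged sketch (function and variable names are illustrative, not pgbench's actual code):

    #include <stdbool.h>
    #include <stdio.h>

    static bool
    run_shell_command(const char *command, char *res, int reslen)
    {
        FILE   *fp = popen(command, "r");

        if (fp == NULL)
            return false;
        if (fgets(res, reslen, fp) == NULL)
        {
            pclose(fp);         /* the close that was missing on this
                                 * early-return path */
            return false;
        }
        pclose(fp);
        return true;
    }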
-
- Dec 12, 2014
-
-
Tom Lane authored
In commit 462bd957, I changed postgres_fdw to rely on get_plan_rowmark() instead of get_parse_rowmark(). I still think that's a good idea in the long run, but as Etsuro Fujita pointed out, it doesn't work today because planner.c forces PlanRowMarks to have markType = ROW_MARK_COPY for all foreign tables. There's no urgent reason to change this in the back branches, so let's just revert that part of yesterday's commit rather than trying to design a better solution under time pressure. Also, add a regression test case showing what postgres_fdw does with FOR UPDATE/SHARE. I'd blithely assumed there was one already, else I'd have realized yesterday that this code didn't work.
-
Tom Lane authored
Ordinarily we can omit checking of a WHERE condition that matches a partial index's condition, when we are using an indexscan on that partial index. However, in SELECT FOR UPDATE we must include the "redundant" filter condition in the plan so that it gets checked properly in an EvalPlanQual recheck. The planner got this mostly right, but improperly omitted the filter condition if the index in question was on an inheritance child table. In READ COMMITTED mode, this could result in incorrectly returning just-updated rows that no longer satisfy the filter condition. The cause of the error is using get_parse_rowmark() when get_plan_rowmark() is what should be used during planning. In 9.3 and up, also fix the same mistake in contrib/postgres_fdw. It's currently harmless there (for lack of inheritance support) but wrong is wrong, and the incorrect code might get copied to someplace where it's more significant. Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
-
- Dec 09, 2014
-
-
Alvaro Herrera authored
Author: Michael Paquier
-
- Dec 05, 2014
-
-
Heikki Linnakangas authored
The old formula was completely bogus with the new WAL record format.
-
- Dec 04, 2014
-
-
Peter Eisentraut authored
dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq first. This doesn't work in a PGXS build, because there is no libpq to build. So just omit setting SHLIB_PREREQS in this case. Note that PGXS users can still use SHLIB_PREREQS (although it is not documented). The problem here is only that contrib modules can be built in-tree or using PGXS, and the prerequisite is only applicable in the former case. Commit 6697aa2b previously attempted to address this by creating a somewhat fake submake-libpq target in Makefile.global. That was not the right fix, and it was also done in a nonportable way, so revert that.
-
- Dec 03, 2014
-
-
Alvaro Herrera authored
Transactions can now set their commit timestamp directly as they commit, or an external transaction commit timestamp can be fed from an outside system using the new function TransactionTreeSetCommitTsData(). This data is crash-safe, and truncated at Xid freeze point, same as pg_clog.

This module is disabled by default because it causes a performance hit, but can be enabled in postgresql.conf, requiring only a server restart. A new test in src/test/modules is included.

Catalog version bumped due to the new subdirectory within PGDATA and a couple of new SQL functions.

Authors: Álvaro Herrera and Petr Jelínek

Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven Singer, Peter Eisentraut
-
- Dec 02, 2014
-
-
Andres Freund authored
The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING PGXACT flag, which allows backends doing logical decoding to be skipped when computing the xmin horizon/snapshots. That's fine and sensible for walsenders streaming out logical changes, but not at all fine for SQL backends doing logical decoding. If the latter set that flag, any change they have performed outside of logical decoding will not be regarded as visible - which e.g. can lead to that change being vacuumed away.

Note that not setting the flag for SQL backends isn't particularly bothersome - the SQL backend doesn't do streaming, so it only runs for a limited amount of time.

Per buildfarm member 'tick' and Alvaro. Backpatch to 9.4, where logical decoding was introduced.
-
Alvaro Herrera authored
Michael Paquier

Double-dash additions suggested by Peter Geoghegan
-
- Dec 01, 2014
-
-
Andrew Dunstan authored
We expose a function IsValidJsonNumber that internally calls the lexer for json numbers. That allows us to use the same test everywhere, instead of inventing a broken test for hstore conversions. The new function is also used in datum_to_json, replacing the code that is now moved to the new function. Backpatch to 9.3 where hstore_to_json_loose was introduced.
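A conversion routine can then decide between a bare number and a quoted string along these lines (sketch; emit_number() and emit_quoted() are hypothetical stand-ins for the real output code):

    /* IsValidJsonNumber runs the JSON number lexer over the string. */
    if (IsValidJsonNumber(val, strlen(val)))
        emit_number(val);       /* safe to output unquoted */
    else
        emit_quoted(val);       /* quote it so the JSON stays valid */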
-
- Nov 30, 2014
-
-
Alvaro Herrera authored
This is advance preparation for introducing even more test modules; the easy solution is to add them to contrib, but that's bloated enough that it seems a good time to think of something different. Moved modules are dummy_seclabel, test_shm_mq, test_parser and worker_spi. (test_decoding was also a candidate, but there was too much opposition to moving that one. We can always reconsider later.)
-
- Nov 28, 2014
-
-
Tom Lane authored
This patch adds a way of iterating through the members of a bitmapset nondestructively, unlike the old way with bms_first_member(). While bms_next_member() is very slightly slower than bms_first_member() (at least for typical-size bitmapsets), eliminating the need to palloc and pfree a temporary copy of the target bitmapset is a significant win. So this method should be preferred in all cases where a temporary copy would be necessary. Tom Lane, with suggestions from Dean Rasheed and David Rowley
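The two iteration styles, side by side (do_something() is a hypothetical stand-in):

    int         x;

    /* Old way: bms_first_member() is destructive, so a temporary
     * copy had to be palloc'd and pfree'd. */
    Bitmapset  *tmp = bms_copy(bitmapset);

    while ((x = bms_first_member(tmp)) >= 0)
        do_something(x);
    bms_free(tmp);

    /* New way: nondestructive.  Start from -1 and feed the previous
     * member back in on each iteration. */
    x = -1;
    while ((x = bms_next_member(bitmapset, x)) >= 0)
        do_something(x);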
-
- Nov 27, 2014
-
-
Tom Lane authored
Mark Simonetti reported that libxslt sometimes crashes for him, and that swapping xslt_process's object-freeing calls around to do them in reverse order of creation seemed to fix it. I've not reproduced the crash, but valgrind clearly shows a reference to already-freed memory, which is consistent with the idea that shutdown of the xsltTransformContext is trying to reference the already-freed stylesheet or input document. With this patch, valgrind is no longer unhappy. I have an inquiry in to see if this is a libxslt bug or if we're just abusing the library; but even if it's a library bug, we'd want to adjust our code so it doesn't fail with unpatched libraries. Back-patch to all supported branches, because we've been doing this in the wrong(?) order for a long time.
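The corrected teardown, in outline (variable names are illustrative; the calls are standard libxslt/libxml2 API):

    /* Free in reverse order of creation: the transform context may
     * still reference the stylesheet and documents, so it goes first. */
    xsltFreeTransformContext(xslt_ctxt);    /* created last, freed first */
    xmlFreeDoc(result);
    xmlFreeDoc(doc);
    xsltFreeStylesheet(stylesheet);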
-
- Nov 25, 2014
-
-
Heikki Linnakangas authored
Code that checks the flag no longer needs #ifdefs, which is more convenient. In particular, it makes it easier to write extensions that depend on it. In passing, modify sslinfo's ssl_is_used function to check ssl_in_use instead of the OpenSSL-specific 'ssl' pointer. It doesn't make any difference currently, as sslinfo is only compiled when built with OpenSSL, but seems cleaner anyway.
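The before-and-after shape of such a check, as a sketch (record_ssl_usage() is a hypothetical caller):

    /* Before: Port only had the OpenSSL-specific pointer, and only
     * under USE_SSL, so every check needed a guard. */
    #ifdef USE_SSL
    if (port->ssl != NULL)
        record_ssl_usage();
    #endif

    /* After: the flag exists in every build, so no #ifdef is needed. */
    if (port->ssl_in_use)
        record_ssl_usage();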
-
- Nov 24, 2014
-
-
Robert Haas authored
This is further infrastructure for parallelism. Amit Khandekar, Noah Misch, Robert Haas
-
- Nov 22, 2014
-
-
Tom Lane authored
postgres_fdw would send query conditions involving system columns to the remote server, even though it makes no effort to ensure that system columns other than CTID match what the remote side thinks. tableoid, in particular, probably won't match and might have some use in queries. Hence, prevent sending conditions that include non-CTID system columns. Also, create_foreignscan_plan neglected to check local restriction conditions while determining whether to set fsSystemCol for a foreign scan plan node. This again would bollix the results for queries that test a foreign table's tableoid. Back-patch the first fix to 9.3 where postgres_fdw was introduced. Back-patch the second to 9.2. The code is probably broken in 9.1 as well, but the patch doesn't apply cleanly there; given the weak state of support for FDWs in 9.1, it doesn't seem worth fixing. Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
-
- Nov 21, 2014
-
-
Heikki Linnakangas authored
Patch by me, Peter Geoghegan and Michael Paquier, reviewed by Amit Kapila.
-
- Nov 20, 2014
-
-
Heikki Linnakangas authored
Each WAL record now carries information about the modified relation and block(s) in a standardized format. That makes it easier to write tools that need that information, like pg_rewind, prefetching the blocks to speed up recovery, etc.

There's a whole new API for building WAL records, replacing the XLogRecData chains used previously. The new API consists of XLogRegister* functions, which are called for each buffer and chunk of data that is added to the record. The new API also gives more control over when a full-page image is written, by passing flags to the XLogRegisterBuffer function.

This also simplifies the XLogReadBufferForRedo() calls. The function can dig the relation and block number from the WAL record, so they no longer need to be passed as arguments.

For the convenience of redo routines, XLogReader now dissects each WAL record after reading it, copying the main data part and the per-block data into MAXALIGNed buffers. The data chunks are not aligned within the WAL record, but the redo routines can assume that the pointers returned by XLogRecGet* functions are. Redo routines are now passed the XLogReaderState, which contains the record in the already-dissected format, instead of the plain XLogRecord.

The new record format also makes the fixed size XLogRecord header smaller, by removing the xl_len field. The length of the "main data" portion is now stored at the end of the WAL record, and there's a separate header after XLogRecord for it. The alignment padding at the end of XLogRecord is also removed. This compensates for the fact that the new format would otherwise be more bulky than the old format.

Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera, Fujii Masao.
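An insert-side sketch of the new API (simplified; the xlrec struct and the heap-insert constants stand in for whatever the caller is logging):

    XLogRecPtr  recptr;

    XLogBeginInsert();
    XLogRegisterData((char *) &xlrec, sizeof(xlrec));   /* main data */
    XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);     /* block ref */
    recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_INSERT);

    PageSetLSN(BufferGetPage(buffer), recptr);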
-
- Nov 19, 2014
-
-
Robert Haas authored
This can cause problems on Windows, where files that are still open can't be unlinked. Jeff Janes
-
- Nov 15, 2014
-
-
Alvaro Herrera authored
-
- Nov 13, 2014
-
-
Andres Freund authored
There are basically three situations in which logical decoding needs to perform cache invalidation: during/after replaying a transaction with catalog changes, when skipping an uninteresting transaction that performed catalog changes, and when erroring out while replaying a transaction. Unfortunately these three cases were all done slightly differently - partially because 8de3e410, which greatly simplifies matters, got committed in the midst of the development of logical decoding.

The actually problematic case was when logical decoding skipped transaction commits (and thus processed invalidations). When used via the SQL interface cache invalidation could access the catalog - bad, because we didn't set up enough state to allow that correctly. It'd not be hard to set up sufficient state, but the simpler solution is to always perform cache invalidation outside a valid transaction.

Also make the different cache invalidation cases look as similar as possible, to ease code review.

This fixes the assertion failure reported by Antonin Houska in 53EE02D9.7040702@gmail.com. The presented testcase has been expanded into a regression test.

Backpatch to 9.4, where logical decoding was introduced.
-
Robert Haas authored
The hope is that we can use this to produce better diagnostics in some cases. Peter Geoghegan, reviewed by Michael Paquier, with some further changes by me.
-
- Nov 12, 2014
-
-
Andres Freund authored
Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and snapbuild.c were missing the FIN_CRC32 call when computing/checking checksums of on disk files. That doesn't lower the error detection capabilities of the checksum, but is inconsistent with other usages.

In a followup mail Heikki also noticed that, contrary to a comment, the 'version' and 'length' struct fields of replication slot's on disk data were not covered by the checksum. That's not likely to lead to actually missed corruption as those fields are cross checked with the expected version and the actual file length. But it's wrong nonetheless.

As fixing these issues makes existing on disk files unreadable, bump the expected versions of on disk files for both slots and logical decoding historic catalog snapshots. This means that loading old files will fail with ERROR: "replication slot file ... has unsupported version 1" and ERROR: "snapbuild state file ... has unsupported version 1 instead of 2" respectively. Given the low likelihood of anybody already using these new features in a production setup that seems acceptable.

Fixing these issues made me notice that there's no regression test covering the loading of historic snapshot from disk - so add one.

Backpatch to 9.4 where these features were introduced.
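The complete checksum sequence, for reference (a sketch using the pre-9.5 CRC macro names; data and length are illustrative):

    pg_crc32    checksum;

    INIT_CRC32(checksum);
    COMP_CRC32(checksum, data, length);
    FIN_CRC32(checksum);    /* the finalization step that slot.c and
                             * snapbuild.c were skipping */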
-