- Mar 28, 2016
Tom Lane authored
- Feb 29, 2016
Alvaro Herrera authored
The docs advised using /usr/local/pgsql/man instead, but that's wrong.

Reported-By: Slawomir Sudnik
Backpatch-To: 9.1
Bug: #13894
- Feb 21, 2016
Tatsuo Ishii authored
With suggestions from Tom Lane.
- Feb 18, 2016
Tom Lane authored
Dead or half-dead index leaf pages were incorrectly reported as live, as a consequence of a code rearrangement I made (during a moment of severe brain fade, evidently) in commit d287818e.

The index metapage was not counted in index_size, causing that result to not agree with the actual index size on-disk.

Index root pages were not counted in internal_pages, which is inconsistent compared to the case of a root that's also a leaf (one-page index), where the root would be counted in leaf_pages. Aside from that inconsistency, this could lead to additional transient discrepancies between the reported page counts and index_size, since it's possible for pgstatindex's scan to see zero or multiple pages marked as BTP_ROOT, if the root moves due to a split during the scan. With these fixes, index_size will always be exactly one page more than the sum of the displayed page counts.

Also, the index_size result was incorrectly documented as being measured in pages; it's always been measured in bytes. (While fixing that, I couldn't resist doing some small additional wordsmithing on the pgstattuple docs.)

Including the metapage causes the reported index_size to not be zero for an empty index. To preserve the desired property that the pgstattuple regression test results are platform-independent (ie, BLCKSZ configuration independent), scale the index_size result in the regression tests.

The documentation issue was reported by Otsuka Kenji, and the inconsistent root page counting by Peter Geoghegan; the other problems noted by me. Back-patch to all supported branches, because this has been broken for a long time.
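As an illustration of the fixed relationship described above (a sketch, not from the commit; it assumes the pgstattuple extension is installed, and the index name is made up):

    -- After this fix, index_size should equal (counted pages + 1 metapage) * block size.
    SELECT index_size,
           (internal_pages + leaf_pages + empty_pages + deleted_pages + 1)
             * current_setting('block_size')::int AS pages_plus_metapage_bytes
    FROM pgstatindex('some_btree_index');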
- Feb 16, 2016
Tom Lane authored
Clarify the description of which transactions will block a CREATE INDEX CONCURRENTLY command from proceeding, and mention that the index might still not be usable after CREATE INDEX completes.

(This happens if the index build detected broken HOT chains, so that pg_index.indcheckxmin gets set, and there are open old transactions preventing the xmin horizon from advancing past the index's initial creation. I didn't want to explain what broken HOT chains are, though, so I omitted an explanation of exactly when old transactions prevent the index from being used.)

Per discussion with Chris Travers. Back-patch to all supported branches, since the same text appears in all of them.
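A quick way to see whether an index got flagged this way (illustrative only; the index name is invented):

    -- If indcheckxmin is true, queries may ignore the index until old
    -- transactions holding back the xmin horizon have ended.
    SELECT indexrelid::regclass AS index_name, indcheckxmin
    FROM pg_index
    WHERE indexrelid = 'my_concurrently_built_index'::regclass;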
Tatsuo Ishii authored
Change "In this case" to "In the example above" to clarify what it actually refers to.
Fujii Masao authored
In runtime.sgml, the old formulas for calculating reasonable values of SEMMNI and SEMMNS were incorrect: they failed to count the semaphores needed by the checkpointer process (introduced in 9.2) and by the background worker processes (introduced in 9.3). This commit fixes the formulas so that those processes are counted as well.

Report and patch by Kyotaro Horiguchi. Only the patch for 9.3 was modified by me. Back-patch to 9.2, where the checkpointer process was added and the number of needed semaphores increased.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Backpatch: 9.2
Discussion: http://www.postgresql.org/message-id/20160203.125119.66820697.horiguchi.kyotaro@lab.ntt.co.jp
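For reference, the corrected guidance amounts to roughly the following (paraphrased from memory for the 9.3-era docs, so treat the exact constants as an assumption rather than a quote):

    \mathrm{SEMMNI} \;\ge\; \left\lceil \frac{\mathrm{max\_connections} + \mathrm{autovacuum\_max\_workers} + \mathrm{max\_worker\_processes} + 5}{16} \right\rceil

    \mathrm{SEMMNS} \;\ge\; 17 \times \left\lceil \frac{\mathrm{max\_connections} + \mathrm{autovacuum\_max\_workers} + \mathrm{max\_worker\_processes} + 5}{16} \right\rceil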
- Feb 11, 2016
Noah Misch authored
Many automated test suites call pg_ctl. Buildfarm members axolotl, hornet, mandrill, shearwater, sungazer and tern have failed when server shutdown took longer than the pg_ctl default 60s timeout. This addition permits slow hosts to easily raise the timeout without us editing a --timeout argument into every test suite pg_ctl call. Back-patch to 9.1 (all supported versions) for the sake of automated testing. Reviewed by Tom Lane.
- Feb 08, 2016
Tom Lane authored
Security: CVE-2016-0773
- Feb 07, 2016
Tom Lane authored
Get rid of the false implication that PRIMARY KEY is exactly equivalent to UNIQUE + NOT NULL. That was more-or-less true at one time in our implementation, but the standard doesn't say that, and we've grown various features (many of them required by spec) that treat a pkey differently from less-formal constraints. Per recent discussion on pgsql-general. I failed to resist the temptation to do some other wordsmithing in the same area.
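To make the difference concrete, a small sketch (table and column names invented for the example): both definitions enforce uniqueness and non-nullness, but only the first creates a primary key, which is what an unqualified foreign-key reference targets by default.

    -- id is the table's primary key.
    CREATE TABLE orders_pk (id integer PRIMARY KEY);

    -- The same data rules are enforced, but there is no primary key here.
    CREATE TABLE orders_unq (id integer UNIQUE NOT NULL);

    -- REFERENCES without a column list targets the referenced table's primary key,
    -- so this works against orders_pk but would fail against orders_unq.
    CREATE TABLE order_lines (order_id integer REFERENCES orders_pk);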
Tom Lane authored
- Jan 31, 2016
Andrew Dunstan authored
Error reported by Igal Sapir.
- Jan 02, 2016
Tom Lane authored
As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist in 9.0-9.3, making the release note text not very useful. Instead make it talk about recovery_target_xid, which did exist then. 9.0 is already out of support, but we can fix the text in the newer branches' copies of its release notes.
Bruce Momjian authored
Backpatch certain files through 9.1
- Dec 28, 2015
Tom Lane authored
Common mathematical convention is that exponentiation associates right to left. We aren't going to change the parser for this, but we could note it in the operator's description. (It's already noted in the operator precedence/associativity table, but users might not look there.) Per bug #13829 from Henrik Pauli.
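For illustration (not part of the original commit message), PostgreSQL's parser applies ^ left to right, so:

    SELECT 2 ^ 3 ^ 3;     -- parsed as (2 ^ 3) ^ 3 = 512
    SELECT 2 ^ (3 ^ 3);   -- the mathematical-convention reading: 2 ^ 27 = 134217728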
- Dec 14, 2015
Tom Lane authored
This has worked that way for a long time, maybe always, but you would not have known it from the documentation. Also back-patch the notes I added to HEAD earlier today about behavior of the "-f -" switch, which likewise have been valid for many releases.
- Dec 13, 2015
Tom Lane authored
Paul Ramsey
- Dec 04, 2015
Tom Lane authored
In commit 1ea0c73c I added a section to user-manag.sgml about how to drop roles that own objects; but as pointed out by Stephen Frost, I neglected that shared objects (databases or tablespaces) may need special treatment. Fix that. Back-patch to supported versions, like the previous patch.
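For example, DROP OWNED does not drop databases or tablespaces owned by the role, so those shared objects have to be handled explicitly (a sketch; the object and role names are made up):

    -- Reassign or drop shared objects owned by the doomed role first:
    ALTER DATABASE app_db OWNER TO new_owner;
    ALTER TABLESPACE fast_ts OWNER TO new_owner;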
- Nov 22, 2015
Tom Lane authored
The POSIX standard for tar headers requires archive member sizes to be printed in octal with at most 11 digits, limiting the representable file size to 8GB. However, GNU tar and apparently most other modern tars support a convention in which oversized values can be stored in base-256, allowing any practical file to be a tar member. Adopt this convention to remove two limitations:

* pg_dump with -Ft output format failed if the contents of any one table exceeded 8GB.

* pg_basebackup failed if the data directory contained any file exceeding 8GB. (This would be a fatal problem for installations configured with a table segment size of 8GB or more, and it has also been seen to fail when large core dump files exist in the data directory.)

File sizes under 8GB are still printed in octal, so that no compatibility issues are created except in cases that would have failed entirely before. In addition, this patch fixes several bugs in the same area:

* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as size_t, which meant that on 32-bit machines it would write a corrupt tar header for file sizes between 4GB and 8GB, even though no error was raised. This broke both "pg_dump -Ft" and pg_basebackup for such cases.

* pg_restore from a tar archive would fail on tables of size between 4GB and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits. This happened even with an archive file not affected by the previous bug.

* pg_basebackup would fail if there were files of size between 4GB and 8GB, even on 64-bit machines.

* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size, on 64-bit big-endian machines.

In view of these potential data-loss bugs, back-patch to all supported branches, even though removal of the documented 8GB limit might otherwise be considered a new feature rather than a bug fix.
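The 8GB figure is simply the arithmetic of the header format: eleven octal digits top out at

    8^{11} - 1 \;=\; 2^{33} - 1 \;\approx\; 8\,\mathrm{GB}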
- Nov 10, 2015
Tom Lane authored
In commit a5ec86a7 I wrote a quick hack that reduced the number of TeX string pool entries created while converting our documentation to PDF form. That held the fort for awhile, but as of HEAD we're back up against the same limitation. It turns out that the original coding of \FlowObjectSetup actually results in *three* string pool entries being generated for every "flow object" (that is, potential cross-reference target) in the documentation, and my previous hack only got rid of one of them. With a little more care, we can reduce the string count to one per flow object plus one per actually-cross-referenced flow object (about 115000 + 5000 as of current HEAD); that should work until the documentation volume roughly doubles from where it is today.

As a not-incidental side benefit, this change also causes pdfjadetex to stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file. It had been making one willy-nilly for every flow object; now it's just one per actually-cross-referenced object. This results in close to a 2X savings in PDF file size. We will still want to run the output through "jpdftweak" to get it to be compressed; but we no longer need removal of unreferenced bookmarks, so we might be able to find a quicker tool for that step.

Although the failure only affects HEAD and US-format output at the moment, 9.5 cannot be more than a few pages short of failing likewise, so it will inevitably fail after a few rounds of minor-version release notes. I don't have a lot of faith that we'll never hit the limit in the older branches; and anyway it would be nice to get rid of jpdftweak across the board. Therefore, back-patch to all supported branches.
- Oct 07, 2015
Tom Lane authored
In general one may have to run both REASSIGN OWNED and DROP OWNED to get rid of all the dependencies of a role to be dropped. This was alluded to in the REASSIGN OWNED man page, but not really spelled out in full; and in any case the procedure ought to be documented in a more prominent place than that. Add a section to the "Database Roles" chapter explaining this, and do a bit of wordsmithing in the relevant commands' man pages.
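The documented procedure boils down to something like this sketch (role names are made up):

    -- In each database containing objects owned by the role,
    -- transfer anything worth keeping, then discard the rest:
    REASSIGN OWNED BY doomed_role TO successor_role;
    DROP OWNED BY doomed_role;

    -- Only after all affected databases have been cleaned up:
    DROP ROLE doomed_role;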
- Oct 05, 2015
Peter Eisentraut authored
Tom Lane authored
Add entries for security and not-quite-security issues.

Security: CVE-2015-5288, CVE-2015-5289
Andres Freund authored
The documentation for the autovacuum_multixact_freeze_max_age and autovacuum_freeze_max_age relation level parameters contained: "Note that while you can set autovacuum_multixact_freeze_max_age very small, or even zero, this is usually unwise since it will force frequent vacuuming." which hasn't been true since these options were made relation options, instead of residing in the pg_autovacuum table (834a6da4).

Remove the outdated sentence. Even the lowered limits from 2596d705 are high enough that this doesn't warrant calling out the risk in the CREATE TABLE docs.

Per discussion with Tom Lane and Alvaro Herrera

Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: 9.0- (in parts)
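For context, these settings are ordinary per-table storage parameters nowadays, e.g. (the table name and value are illustrative):

    ALTER TABLE measurements
      SET (autovacuum_multixact_freeze_max_age = 200000000);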
Tom Lane authored
- Oct 02, 2015
Tom Lane authored
It's not terribly hard to devise regular expressions that take large amounts of time and/or memory to process. Recent testing by Greg Stark has also shown that machines with small stack limits can be driven to stack overflow by suitably crafted regexps. While we intend to fix these things as much as possible, it's probably impossible to eliminate slow-execution cases altogether. In any case we don't want to treat such things as security issues. The history of that code should already discourage prudent DBAs from allowing execution of regexp patterns coming from possibly-hostile sources, but it seems like a good idea to warn about the hazard explicitly.

Currently, similar_escape() allows access to enough of the underlying regexp behavior that the warning has to apply to SIMILAR TO as well. We might be able to make it safer if we tightened things up to allow only SQL-mandated capabilities in SIMILAR TO; but that would be a subtly non-backwards-compatible change, so it requires discussion and probably could not be back-patched.

Per discussion among pgsql-security list.
- Sep 22, 2015
Tom Lane authored
Per bug #13631 from KOIZUMI Satoru.
- Sep 16, 2015
Tom Lane authored
The docs claimed that \uhhhh would be interpreted as a Unicode value regardless of the database encoding, but it's never been implemented that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely the character that pg_mb2wchar translates to 0xhhhh. Moreover we were falsely dismissive of the usefulness of Unicode code points above FFFF. Fix that. It's been like this for ages, so back-patch to all supported branches.
- Sep 07, 2015
Teodor Sigaev authored
- Aug 27, 2015
Bruce Momjian authored
This makes the parameter names match the documented prototype names.

Report by Erwin Brandstetter

Backpatch through 9.0
- Aug 26, 2015
Tom Lane authored
The default argument, if given, has to be of exactly the same datatype as the first argument; but this was not stated in so many words, and the error message you get about it might not lead your thought in the right direction. Per bug #13587 from Robert McGehee.

A quick scan says that these are the only two built-in functions with two anyelement arguments and no other polymorphic arguments. There are plenty of cases of, eg, anyarray and anyelement, but those seem less likely to confuse. For instance this doesn't seem terribly hard to figure out: "function array_remove(integer[], numeric) does not exist". So I've contented myself with fixing these two cases.
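The message doesn't name the two functions, but the description matches the lag()/lead() window functions, whose optional default argument is polymorphic. Purely as an illustration of the datatype-matching requirement (table and column names are made up):

    -- Fails: 0 is an integer but amount is numeric, so the polymorphic arguments
    -- don't resolve to a single type and no matching lag() is found.
    SELECT lag(amount, 1, 0) OVER (ORDER BY paid_at) FROM payments;

    -- Works: the default now has the same type as the first argument.
    SELECT lag(amount, 1, 0::numeric) OVER (ORDER BY paid_at) FROM payments;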
- Aug 15, 2015
Tom Lane authored
The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same way as TRUNCATE, because they replace all rows of the table with newly-made rows with a new xmin. (Ideally, concurrent transactions with old snapshots would continue to see the old table contents, but the data is not there anymore --- and if it were there, it would be inconsistent with the table's updated rowtype, so there would be serious implementation problems to fix.) This was nowhere documented though, and the problem was only documented for TRUNCATE in a note in the TRUNCATE reference page. Create a new "Caveats" section in the MVCC chapter that can be home to this and other limitations on serializable consistency.

In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim space occupied by a dropped column. They don't reconstruct existing tuples so they couldn't do that.

Back-patch to all supported branches.
- Aug 05, 2015
Tom Lane authored
Per discussion of bug #13538.
- Jul 29, 2015
Tom Lane authored
Although initdb has long discouraged use of a filesystem mount-point directory as a PG data directory, this point was covered nowhere in the user-facing documentation. Also, with the popularity of pg_upgrade, we really need to recommend that the PG user own not only the data directory but its parent directory too. (Without a writable parent directory, operations such as "mv data data.old" fail immediately. pg_upgrade itself doesn't do that, but wrapper scripts for it often do.)

Hence, adjust the "Creating a Database Cluster" section to address these points. I also took the liberty of wordsmithing the discussion of NFS a bit.

These considerations aren't by any means new, so back-patch to all supported branches.
- Jul 28, 2015
Andres Freund authored
While postgres' use of SSL renegotiation is a good idea in theory, it turned out to not work well in practice. The specification and openssl's implementation of it have led to several security issues. Postgres' use of renegotiation also had its share of bugs. Additionally OpenSSL has a bunch of bugs around renegotiation, reported and open for years, that regularly lead to connections breaking with obscure error messages. We tried increasingly complex workarounds to get around these bugs, but we didn't find anything complete.

Since these connection breakages often lead to hard-to-debug problems, e.g. spuriously failing base backups and significant latency spikes when synchronous replication is used, we have decided to change the default setting for ssl renegotiation to 0 (disabled) in the released back branches and remove it entirely in 9.5 and master.

Author: Michael Paquier, with changes by me
Discussion: 20150624144148.GQ4797@alap3.anarazel.de
Backpatch: 9.0-9.4; 9.5 and master get a different patch
- Jul 10, 2015
Tom Lane authored
The documentation implied that there was seldom any reason to use the array_append, array_prepend, and array_cat functions directly. But that's not really true, because they can help make it clear which case is meant, which the || operator can't do since it's overloaded to represent all three cases. Add some discussion and examples illustrating the potentially confusing behavior that can ensue if the parser misinterprets what was meant. Per a complaint from Michael Herold. Back-patch to 9.2, which is where || started to behave this way.
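A minimal illustration of the ambiguity (values chosen only for the example):

    -- With the overloaded operator, the reader has to infer which case is meant:
    SELECT ARRAY[1,2] || 3;            -- append an element
    SELECT ARRAY[1,2] || ARRAY[3,4];   -- concatenate two arrays

    -- The underlying functions state the intent explicitly:
    SELECT array_append(ARRAY[1,2], 3);
    SELECT array_cat(ARRAY[1,2], ARRAY[3,4]);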
- Jul 09, 2015
Heikki Linnakangas authored
Tom fixed another one of these in commit 7f32dbcd, but there was another almost identical one in libpq docs. Per his comment: HP's web server has apparently become case-sensitive sometime recently. Per bug #13479 from Daniel Abraham. Corrected link identified by Alvaro.