- Dec 04, 2015
Tom Lane authored
In commit 1ea0c73c I added a section to user-manag.sgml about how to drop roles that own objects; but as pointed out by Stephen Frost, I neglected that shared objects (databases or tablespaces) may need special treatment. Fix that. Back-patch to supported versions, like the previous patch.
- Nov 22, 2015
Tom Lane authored
The POSIX standard for tar headers requires archive member sizes to be printed in octal with at most 11 digits, limiting the representable file size to 8GB. However, GNU tar and apparently most other modern tars support a convention in which oversized values can be stored in base-256, allowing any practical file to be a tar member. Adopt this convention to remove two limitations:
* pg_dump with -Ft output format failed if the contents of any one table exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding 8GB. (This would be a fatal problem for installations configured with a table segment size of 8GB or more, and it has also been seen to fail when large core dump files exist in the data directory.)
File sizes under 8GB are still printed in octal, so that no compatibility issues are created except in cases that would have failed entirely before.
In addition, this patch fixes several bugs in the same area:
* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as size_t, which meant that on 32-bit machines it would write a corrupt tar header for file sizes between 4GB and 8GB, even though no error was raised. This broke both "pg_dump -Ft" and pg_basebackup for such cases.
* pg_restore from a tar archive would fail on tables of size between 4GB and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits. This happened even with an archive file not affected by the previous bug.
* pg_basebackup would fail if there were files of size between 4GB and 8GB, even on 64-bit machines.
* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size, on 64-bit big-endian machines.
In view of these potential data-loss bugs, back-patch to all supported branches, even though removal of the documented 8GB limit might otherwise be considered a new feature rather than a bug fix.
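As an illustration of the convention adopted here, a minimal sketch of filling the 12-byte tar size field (a hypothetical helper, not the actual tarCreateHeader() code):

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Fill the 12-byte "size" field of a tar header.  Values under 8GB fit
 * the POSIX format: 11 octal digits plus a terminator.  Larger values
 * use the GNU base-256 convention: set the high bit of the first byte
 * and store the value big-endian in the remaining 11 bytes.
 */
static void
tar_set_size(char *field, uint64_t size)
{
    if (size < 8589934592ULL)   /* 8GB = 2^33, the 11-octal-digit limit */
        snprintf(field, 12, "%011llo", (unsigned long long) size);
    else
    {
        field[0] = (char) 0x80; /* flag byte marking base-256 encoding */
        for (int i = 11; i > 0; i--)
        {
            field[i] = (char) (size & 0xFF);
            size >>= 8;
        }
    }
}
```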
- Nov 10, 2015
Tom Lane authored
In commit a5ec86a7 I wrote a quick hack that reduced the number of TeX string pool entries created while converting our documentation to PDF form. That held the fort for awhile, but as of HEAD we're back up against the same limitation.

It turns out that the original coding of \FlowObjectSetup actually results in *three* string pool entries being generated for every "flow object" (that is, potential cross-reference target) in the documentation, and my previous hack only got rid of one of them. With a little more care, we can reduce the string count to one per flow object plus one per actually-cross-referenced flow object (about 115000 + 5000 as of current HEAD); that should work until the documentation volume roughly doubles from where it is today.

As a not-incidental side benefit, this change also causes pdfjadetex to stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file. It had been making one willy-nilly for every flow object; now it's just one per actually-cross-referenced object. This results in close to a 2X savings in PDF file size. We will still want to run the output through "jpdftweak" to get it to be compressed; but we no longer need removal of unreferenced bookmarks, so we might be able to find a quicker tool for that step.

Although the failure only affects HEAD and US-format output at the moment, 9.5 cannot be more than a few pages short of failing likewise, so it will inevitably fail after a few rounds of minor-version release notes. I don't have a lot of faith that we'll never hit the limit in the older branches; and anyway it would be nice to get rid of jpdftweak across the board. Therefore, back-patch to all supported branches.
- Oct 07, 2015
Tom Lane authored
In general one may have to run both REASSIGN OWNED and DROP OWNED to get rid of all the dependencies of a role to be dropped. This was alluded to in the REASSIGN OWNED man page, but not really spelled out in full; and in any case the procedure ought to be documented in a more prominent place than that. Add a section to the "Database Roles" chapter explaining this, and do a bit of wordsmithing in the relevant commands' man pages.
- Oct 05, 2015
Peter Eisentraut authored
Tom Lane authored
Add entries for security and not-quite-security issues.
Security: CVE-2015-5288, CVE-2015-5289
Andres Freund authored
The documentation for the autovacuum_multixact_freeze_max_age and autovacuum_freeze_max_age relation-level parameters contained: "Note that while you can set autovacuum_multixact_freeze_max_age very small, or even zero, this is usually unwise since it will force frequent vacuuming." That hasn't been true since these options were made relation options, instead of residing in the pg_autovacuum table (834a6da4). Remove the outdated sentence. Even the lowered limits from 2596d705 are high enough that this doesn't warrant calling out the risk in the CREATE TABLE docs.

Per discussion with Tom Lane and Alvaro Herrera
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: 9.0- (in parts)
Tom Lane authored
- Oct 02, 2015
Tom Lane authored
It's not terribly hard to devise regular expressions that take large amounts of time and/or memory to process. Recent testing by Greg Stark has also shown that machines with small stack limits can be driven to stack overflow by suitably crafted regexps. While we intend to fix these things as much as possible, it's probably impossible to eliminate slow-execution cases altogether. In any case we don't want to treat such things as security issues. The history of that code should already discourage prudent DBAs from allowing execution of regexp patterns coming from possibly-hostile sources, but it seems like a good idea to warn about the hazard explicitly.

Currently, similar_escape() allows access to enough of the underlying regexp behavior that the warning has to apply to SIMILAR TO as well. We might be able to make it safer if we tightened things up to allow only SQL-mandated capabilities in SIMILAR TO; but that would be a subtly non-backwards-compatible change, so it requires discussion and probably could not be back-patched.

Per discussion on the pgsql-security list.
- Sep 22, 2015
Tom Lane authored
Per bug #13631 from KOIZUMI Satoru.
- Sep 16, 2015
Tom Lane authored
The docs claimed that \uhhhh would be interpreted as a Unicode value regardless of the database encoding, but it's never been implemented that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely the character that pg_mb2wchar translates to 0xhhhh. Moreover we were falsely dismissive of the usefulness of Unicode code points above FFFF. Fix that. It's been like this for ages, so back-patch to all supported branches.
- Sep 07, 2015
Teodor Sigaev authored
- Aug 27, 2015
Bruce Momjian authored
This makes the parameter names match the documented prototype names.
Report by Erwin Brandstetter
Backpatch through 9.0
- Aug 26, 2015
Tom Lane authored
The default argument, if given, has to be of exactly the same datatype as the first argument; but this was not stated in so many words, and the error message you get about it might not lead your thought in the right direction. Per bug #13587 from Robert McGehee. A quick scan says that these are the only two built-in functions with two anyelement arguments and no other polymorphic arguments. There are plenty of cases of, eg, anyarray and anyelement, but those seem less likely to confuse. For instance this doesn't seem terribly hard to figure out: "function array_remove(integer[], numeric) does not exist". So I've contented myself with fixing these two cases.
- Aug 15, 2015
Tom Lane authored
The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same way as TRUNCATE, because they replace all rows of the table with newly-made rows with a new xmin. (Ideally, concurrent transactions with old snapshots would continue to see the old table contents, but the data is not there anymore --- and if it were there, it would be inconsistent with the table's updated rowtype, so there would be serious implementation problems to fix.) This was nowhere documented though, and the problem was only documented for TRUNCATE in a note in the TRUNCATE reference page.

Create a new "Caveats" section in the MVCC chapter that can be home to this and other limitations on serializable consistency.

In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim space occupied by a dropped column. They don't reconstruct existing tuples so they couldn't do that.

Back-patch to all supported branches.
- Aug 05, 2015
Tom Lane authored
Per discussion of bug #13538.
- Jul 29, 2015
Tom Lane authored
Although initdb has long discouraged use of a filesystem mount-point directory as a PG data directory, this point was covered nowhere in the user-facing documentation. Also, with the popularity of pg_upgrade, we really need to recommend that the PG user own not only the data directory but its parent directory too. (Without a writable parent directory, operations such as "mv data data.old" fail immediately. pg_upgrade itself doesn't do that, but wrapper scripts for it often do.)

Hence, adjust the "Creating a Database Cluster" section to address these points. I also took the liberty of wordsmithing the discussion of NFS a bit.

These considerations aren't by any means new, so back-patch to all supported branches.
- Jul 28, 2015
Andres Freund authored
While postgres' use of SSL renegotiation is a good idea in theory, it turned out to not work well in practice. The specification and openssl's implementation of it have led to several security issues. Postgres' use of renegotiation also had its share of bugs. Additionally OpenSSL has a bunch of bugs around renegotiation, reported and open for years, that regularly lead to connections breaking with obscure error messages. We tried increasingly complex workarounds to get around these bugs, but we didn't find anything complete.

Since these connection breakages often lead to hard-to-debug problems, e.g. spuriously failing base backups and significant latency spikes when synchronous replication is used, we have decided to change the default setting for SSL renegotiation to 0 (disabled) in the released back branches and remove it entirely in 9.5 and master.

Author: Michael Paquier, with changes by me
Discussion: 20150624144148.GQ4797@alap3.anarazel.de
Backpatch: 9.0-9.4; 9.5 and master get a different patch
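In the back branches, the new default corresponds to this postgresql.conf setting (the parameter itself is removed in 9.5 and later):

```
# postgresql.conf (9.0-9.4): disable SSL renegotiation, now the default
ssl_renegotiation_limit = 0
```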
- Jul 10, 2015
Tom Lane authored
The documentation implied that there was seldom any reason to use the array_append, array_prepend, and array_cat functions directly. But that's not really true, because they can help make it clear which case is meant, which the || operator can't do since it's overloaded to represent all three cases. Add some discussion and examples illustrating the potentially confusing behavior that can ensue if the parser misinterprets what was meant. Per a complaint from Michael Herold. Back-patch to 9.2, which is where || started to behave this way.
- Jul 09, 2015
Heikki Linnakangas authored
Tom fixed another one of these in commit 7f32dbcd, but there was another almost identical one in libpq docs. Per his comment: HP's web server has apparently become case-sensitive sometime recently. Per bug #13479 from Daniel Abraham. Corrected link identified by Alvaro.
- Jul 06, 2015
Fujii Masao authored
The .backup file name can be passed to pg_archivecleanup even if it includes the extension specified in the -x option. However, previously the documentation incorrectly warned a user not to do that. Back-patch to 9.2 where pg_archivecleanup's -x option and the warning were added.
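For illustration (hypothetical archive location and file names), both spellings of the .backup name are acceptable when -x .gz is in use:

```
pg_archivecleanup -x .gz /mnt/server/archive 000000010000003700000010.00000020.backup
pg_archivecleanup -x .gz /mnt/server/archive 000000010000003700000010.00000020.backup.gz
```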
- Jul 01, 2015
Tom Lane authored
HP's web server has apparently become case-sensitive sometime recently. Per bug #13479 from Daniel Abraham. Corrected link identified by Alvaro.
- Jun 25, 2015
- Jun 09, 2015
Tom Lane authored
- Jun 01, 2015
- May 20, 2015
Tom Lane authored
Revise description of CVE-2015-3166, in line with the scaled-back patch. Change release date.
Security: CVE-2015-3166
- May 18, 2015
Tom Lane authored
Add entries for security issues.
Security: CVE-2015-3165 through CVE-2015-3167
Noah Misch authored
This has been the predominant outcome. When the output of decrypting with a wrong key coincidentally resembled an OpenPGP packet header, pgcrypto could instead report "Corrupt data", "Not text data" or "Unsupported compression algorithm". The distinct "Corrupt data" message added no value. The latter two error messages misled when the decrypted payload also exhibited fundamental integrity problems. Worse, error message variance in other systems has enabled cryptologic attacks; see RFC 4880 section "14. Security Considerations". Whether these pgcrypto behaviors are likewise exploitable is unknown.

In passing, document that pgcrypto does not resist side-channel attacks.

Back-patch to 9.0 (all supported versions).
Security: CVE-2015-3167
- May 17, 2015
Tom Lane authored
- May 16, 2015
Tom Lane authored
I don't think "respectfully" is what was meant here ...
- May 14, 2015
Tom Lane authored
This encoding has characters up to 4 bytes long, not 2.
- May 09, 2015
Stephen Frost authored
As discussed, the default setting of include_realm=0 can be dangerous in multi-realm environments because it is then impossible to differentiate users with the same username but who are from two different realms. Recommend include_realm=1 and note that the default setting may change in a future version of PostgreSQL and therefore users may wish to explicitly set include_realm to avoid issues while upgrading.
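For illustration, a hypothetical pg_hba.conf line spelling the option out explicitly (GSSAPI shown here as one of the methods that accept include_realm):

```
# pg_hba.conf: keep the @REALM part of the principal in the user name
host    all    all    0.0.0.0/0    gss    include_realm=1
```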
- May 05, 2015
Tom Lane authored
- Apr 09, 2015
Magnus Hagander authored
Amit Langote
- Apr 06, 2015
Fujii Masao authored
Back-patch to all supported versions.
Michael Paquier
- Apr 02, 2015
Alvaro Herrera authored
psql was already accepting conninfo strings as the first parameter in \connect, but the way it worked wasn't sane; some of the other parameters would get the previous connection's values, causing it to connect to a completely unexpected server or, more likely, not find any server at all because of completely wrong combinations of parameters. Fix by explicitly checking for a conninfo-looking parameter in the dbname position; if one is found, use its complete specification rather than mixing it with the other arguments. Also, change tab-completion to not try to complete conninfo/URI-looking "dbnames", and document that conninfos are accepted as the first argument.

There was a weak consensus to backpatch this, because while the behavior of using the dbname as a conninfo is nowhere documented for \connect, it is reasonable to expect that it works because it does work in many other contexts. Therefore this is backpatched all the way back to 9.0.

To implement this, routines previously private to libpq have been duplicated so that psql can decide what looks like a conninfo/URI string. In back branches, just duplicate the same code all the way back to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version. In master, the routines are moved to src/common and renamed.

Author: David Fetter, Andrew Dunstan. Some editorialization by me (probably earning a Gierth "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost, Robert Haas, Andrew Dunstan.
- Apr 01, 2015
Tom Lane authored
You're required to write either RANGE or ROWS to start a frame clause, but the documentation incorrectly implied this is optional. Noted by David Johnston.
- Mar 08, 2015
Tom Lane authored
The SGML docs claimed that 1-byte integers could be sent or received with the "isint" options, but no such behavior has ever been implemented in pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was even less in tune with reality, and the code itself used parameter names matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit of additional wordsmithing on the SGML docs while at it. Since the business about 1-byte integers is a clear documentation bug, back-patch to all supported branches.
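For context, a minimal sketch of a fast-path call through PQfn(), under the corrected rule that "isint" arguments must be exactly 2 or 4 bytes (the wrapper and the function OID it is given are hypothetical):

```c
#include <libpq-fe.h>

/*
 * Call a backend function via the fast-path interface.  Integer
 * arguments passed with isint = 1 must have a len of 2 or 4 bytes;
 * 1-byte integers were never supported, contrary to the old docs.
 */
static int
call_int4_function(PGconn *conn, int fnid, int arg, int *result)
{
    PQArgBlock  args[1];
    int         result_len;
    PGresult   *res;

    args[0].isint = 1;
    args[0].len = 4;            /* must be 2 or 4, never 1 */
    args[0].u.integer = arg;

    res = PQfn(conn, fnid, result, &result_len,
               1,               /* the result is an integer, too */
               args, 1);
    if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        PQclear(res);
        return -1;
    }
    PQclear(res);
    return 0;
}
```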
- Mar 02, 2015
Stephen Frost authored
Since 9.1, we've provided extensions with a way to denote "configuration" tables: tables created by an extension which the user may modify. By marking these as "configuration" tables, the extension is asking for the data in these tables to be pg_dump'd (tables which are not marked in this way are assumed to be entirely handled during CREATE EXTENSION and are not included at all in a pg_dump).

Unfortunately, pg_dump neglected to consider foreign key relationships between extension configuration tables and therefore could end up trying to reload the data in an order which would cause FK violations. This patch teaches pg_dump about these dependencies, so that the data is dumped out in the best order possible. Note that there's no way to handle circular dependencies, but those have yet to be seen in the wild.

The release notes for this should include a caution to users that existing pg_dump-based backups may be invalid due to this issue. The data is all there, but restoring from it will require extracting the data for the configuration tables and then loading them in the correct order by hand.

Discussed initially back in bug #6738, more recently brought up by Gilles Darold, who provided an initial patch which was further reworked by Michael Paquier. Further modifications and documentation updates by me. Back-patch to 9.1 where we added the concept of extension configuration tables.
- Feb 23, 2015
Heikki Linnakangas authored
If the libpq output buffer is full, the pqSendSome() function tries to drain any incoming data. This avoids deadlock if the server e.g. sends a lot of NOTICE messages and blocks until we read them. However, pqSendSome() only did that in blocking mode. In non-blocking mode, the deadlock could still happen. To fix, take a two-pronged approach:

1. Change the documentation to instruct that when PQflush() returns 1, you should wait for both read- and write-ready, and call PQconsumeInput() if it becomes read-ready. That fixes the deadlock, but applications are not going to change overnight.

2. In pqSendSome(), drain the input buffer before returning 1. This alleviates the problem for applications that only wait for write-ready. In particular, a slow but steady stream of NOTICE messages during COPY FROM STDIN will no longer cause a deadlock.

The risk remains that the server attempts to send a large burst of data and fills its output buffer, and at the same time the client also sends enough data to fill its output buffer. The application will deadlock if it goes to sleep, waiting for the socket to become write-ready, before the server's data arrives. In practice, NOTICE messages and such that the server might be sending are usually short, so it's highly unlikely that the server would fill its output buffer so quickly.

Backpatch to all supported versions.
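A minimal sketch of the newly documented client-side loop, assuming a select()-based event loop (function name and error handling are illustrative only):

```c
#include <sys/select.h>
#include <libpq-fe.h>

/*
 * Flush queued output on a non-blocking connection, following the
 * documented advice: while PQflush() returns 1, wait for the socket
 * to become read- or write-ready; on read-ready, call PQconsumeInput()
 * to absorb server data, then try flushing again.
 */
static int
flush_nonblocking(PGconn *conn)
{
    int sock = PQsocket(conn);

    for (;;)
    {
        int rc = PQflush(conn);

        if (rc == 0)
            return 0;           /* all queued data sent */
        if (rc < 0)
            return -1;          /* error; see PQerrorMessage(conn) */

        /* rc == 1: wait until the socket is read- or write-ready */
        fd_set rmask, wmask;
        FD_ZERO(&rmask);
        FD_ZERO(&wmask);
        FD_SET(sock, &rmask);
        FD_SET(sock, &wmask);
        if (select(sock + 1, &rmask, &wmask, NULL, NULL) < 0)
            return -1;

        /* if server data arrived, drain it to avoid the deadlock */
        if (FD_ISSET(sock, &rmask) && !PQconsumeInput(conn))
            return -1;
    }
}
```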