- Jul 29, 2014
Heikki Linnakangas authored
There were several oversights in recovery code where COMMIT/ABORT PREPARED records were ignored:

* pg_last_xact_replay_timestamp() (wasn't updated for 2PC commits)
* recovery_min_apply_delay (2PC commits were applied immediately)
* recovery_target_xid (recovery would not stop if the XID used 2PC)

The first of those was reported by Sergiy Zuban in bug #11032, analyzed by Tom Lane and Andres Freund. The bug was always there, but was masked before commit d19bd29f, because COMMIT PREPARED always created an extra regular transaction that was WAL-logged. Backpatch to all supported versions (older versions didn't have all the features and therefore didn't have all of the above bugs).
- Jul 28, 2014
Fujii Masao authored
unix_socket_directories was introduced in 9.3, but the documentation in older versions wrongly mentioned it. This commit replaces it with the correct older name, unix_socket_directory. This applies only to 9.2 and older supported versions. Guillaume Lelarge
- Jul 26, 2014
Tom Lane authored
findDependencyLoops() was not bright about cases where there are multiple dependency paths between the same two dumpable objects. In most scenarios this did not hurt us too badly; but since the introduction of section boundary pseudo-objects in commit a1ef01fe, it was possible for this code to take unreasonable amounts of time (tens of seconds on a database with a couple thousand objects), as reported in bug #11033 from Joe Van Dyk. Joe's particular problem scenario involved "pg_dump -a" mode with long chains of foreign key constraints, but I think that similar problems could arise with other situations as long as there were enough objects. To fix, add a flag array that lets us notice when we arrive at the same object again while searching from a given start object. This simple change seems to be enough to eliminate the performance problem. Back-patch to 9.1, like the patch that introduced section boundary objects.
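The key idea in the fix — a per-search flag array so the traversal never descends into the same object twice — is a standard way to keep this kind of dependency walk linear. A minimal standalone sketch of the technique in C (generic graph code with illustrative names, not pg_dump's actual findLoop() logic):

```c
#include <stdbool.h>

typedef struct Obj
{
    int          id;            /* index into the 'seen' array */
    struct Obj **deps;          /* objects this one depends on */
    int          ndeps;
} Obj;

/*
 * Does any dependency path from 'cur' lead back to 'start'?  Each object is
 * explored at most once per starting point thanks to the 'seen' flags, so
 * many parallel paths reaching the same object no longer cause repeated work.
 */
static bool
path_back_to_start(Obj *cur, Obj *start, bool *seen)
{
    int         i;

    for (i = 0; i < cur->ndeps; i++)
    {
        Obj *next = cur->deps[i];

        if (next == start)
            return true;
        if (seen[next->id])
            continue;           /* already explored from this start object */
        seen[next->id] = true;
        if (path_back_to_start(next, start, seen))
            return true;
    }
    return false;
}

int
main(void)
{
    /* a -> b -> c -> a forms a dependency loop */
    Obj  a = {0}, b = {1}, c = {2};
    Obj *a_deps[] = {&b}, *b_deps[] = {&c}, *c_deps[] = {&a};
    bool seen[3] = {false, false, false};

    a.deps = a_deps; a.ndeps = 1;
    b.deps = b_deps; b.ndeps = 1;
    c.deps = c_deps; c.ndeps = 1;

    return path_back_to_start(&a, &a, seen) ? 0 : 1;
}
```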
- Jul 24, 2014
Robert Haas authored
Spotted by Tom Lane.
- Jul 23, 2014
Tom Lane authored
Break the list of available options into an <itemizedlist> instead of inline sentences. This is mostly motivated by wanting to ensure that the cross-references to the FSM and VM docs don't cross page boundaries in PDF format; but it seems to me to read more easily this way anyway. I took the liberty of editorializing a bit further while at it. Per complaint from Magnus about 9.0.18 docs not building in A4 format. Patch all active branches so we don't get blind-sided by this particular issue again in future.
Noah Misch authored
This is consistent with the POSIX verdict that kill() shall not report ESRCH for a zombie process. Back-patch to 9.0 (all supported versions). Test code from commit d7cdf6ee depends on it, and log messages about kill() reporting "Invalid argument" will cease to appear for this not-unexpected condition.
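The POSIX rule being relied on is easy to demonstrate in isolation: kill() with signal 0 on an exited-but-unreaped (zombie) child succeeds instead of failing with ESRCH. A small standalone example (POSIX C, illustrative only):

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    pid_t pid = fork();

    if (pid == 0)
        _exit(0);               /* child exits immediately */
    if (pid < 0)
    {
        perror("fork");
        return 1;
    }

    sleep(1);                   /* give the child time to exit; it is not reaped yet */

    /* Signal 0 only checks whether the process exists; a zombie still does. */
    if (kill(pid, 0) == 0)
        printf("kill() succeeded on zombie child %d\n", (int) pid);
    else
        printf("kill() failed: %s\n", strerror(errno));

    waitpid(pid, NULL, 0);      /* reap the child */
    return 0;
}
```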
Noah Misch authored
Commit d7cdf6ee introduced a usage thereof. Back-patch to 9.0, like that commit.
- Jul 22, 2014
Tom Lane authored
get_raw_page tried to validate the supplied block number against RelationGetNumberOfBlocks(), which of course is only right when accessing the main fork. In most cases, the main fork is longer than the others, so that the check was too weak (allowing a lower-level error to be reported, but no real harm to be done). However, very small tables could have an FSM larger than their heap, in which case the mistake prevented access to some FSM pages. Per report from Torsten Foertsch. In passing, make the bad-block-number error into an ereport not elog (since it's certainly not an internal error); and fix sloppily maintained comment for RelationGetNumberOfBlocksInFork. This has been wrong since we invented relation forks, so back-patch to all supported branches.
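Schematically, the fix validates the requested block against the length of the specific fork being read rather than the main fork. A hedged backend-code fragment illustrating the idea (not the actual contrib/pageinspect code; the variable names are assumptions):

```c
/* blkno and forknum (main, fsm, or vm) come from the user's get_raw_page() call */
if (blkno >= RelationGetNumberOfBlocksInFork(rel, forknum))
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("block number %u is out of range for relation \"%s\"",
                    blkno, RelationGetRelationName(rel))));
```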
Noah Misch authored
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL backends can crash at exit. Raise a warning during "configure" based on the compile-time OpenLDAP version number, and test the crash scenario in the dblink test suite. Back-patch to 9.0 (all supported versions).
Tom Lane authored
In commit 631dc390, we started to handle simple numeric timezone offsets via the zic library instead of the old CTimeZone/HasCTZSet kluge. However, we overlooked the fact that the zic code will reject UTC offsets exceeding a week (which seems a bit arbitrary, but not because it's too tight ...). This led to possibly setting session_timezone to NULL, which results in crashes in most timezone-related operations as of 9.4, and crashes in a small number of places even before that. So check for NULL return from pg_tzset_offset() and report an appropriate error message. Per bug #11014 from Duncan Gillis. Back-patch to all supported branches, like the previous patch. (Unfortunately, as of today that no longer includes 8.4.)
- Jul 21, 2014
Tom Lane authored
Peter Eisentraut authored
- Jul 20, 2014
Tom Lane authored
Rather remarkable that this has been wrong since 9.1 and nobody noticed.
- Jul 19, 2014
Tom Lane authored
DST law changes in Crimea, Egypt, Morocco. New zone Antarctica/Troll for Norwegian base in Queen Maud Land.
- Jul 18, 2014
Noah Misch authored
~/.pgpass is a sound choice everywhere, and "peer" authentication is safe on every platform it supports. Cease to recommend "trust" authentication, the safety of which is deeply configuration-specific. Back-patch to 9.0, where pg_upgrade was introduced.
Tom Lane authored
If pg_regcomp failed after having invoked markst/cleanst, it would leak any "struct subre" nodes it had created. (We've already detected all regex syntax errors at that point, so the only likely causes of later failure would be query cancel or out-of-memory.) To fix, make sure freesrnode knows the difference between the pre-cleanst and post-cleanst cleanup procedures. Add some documentation of this less-than-obvious point. Also, newlacon did the wrong thing with an out-of-memory failure from realloc(), so that the previously allocated array would be leaked. Both of these are pretty low-probability scenarios, but a bug is a bug, so patch all the way back. Per bug #10976 from Arthur O'Dwyer.
- Jul 15, 2014
Alvaro Herrera authored
Trying to reassign objects owned by a user that had text search dictionaries or configurations used to fail with:

    ERROR: unexpected classid 3600
or
    ERROR: unexpected classid 3602

Fix by adding cases for those object types in a switch in pg_shdepend.c. Both REASSIGN OWNED and text search objects go back all the way to 8.1, so backpatch to all supported branches. In 9.3 the alter-owner code was made generic, so the required change in recent branches is pretty simple; however, for 9.2 and older ones we need some additional reshuffling to enable specifying objects by OID rather than name. Text search templates and parsers are not owned objects, so there's no change required for them. Per bug #9749 reported by Michal Novotný.
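Structurally, the fix amounts to giving the ownership-change switch explicit cases for the two text search catalogs (classid 3600 is pg_ts_dict, classid 3602 is pg_ts_config). A hedged schematic of the shape of that change for the pre-9.3 branches (the AlterTS...Owner_oid helper names are illustrative assumptions, not necessarily the committed ones):

```c
switch (sdepForm->classid)
{
        /* ... existing cases for tables, functions, operators, etc. ... */

    case TSDictionaryRelationId:        /* classid 3600 */
        AlterTSDictionaryOwner_oid(sdepForm->objid, newrole);      /* hypothetical helper */
        break;

    case TSConfigRelationId:            /* classid 3602 */
        AlterTSConfigurationOwner_oid(sdepForm->objid, newrole);   /* hypothetical helper */
        break;

    default:
        elog(ERROR, "unexpected classid %u", sdepForm->classid);
        break;
}
```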
Simon Riggs authored
If the walsender has set its xmin from the standby's feedback, ensure we reset the value to 0 when hot_standby_feedback is changed from on to off.
Peter Eisentraut authored
From: Josh Kupershmidt <schmiddy@gmail.com>
- Jul 12, 2014
Magnus Hagander authored
Adds support for tab completion of LC_COLLATE and LC_CTYPE in psql's CREATE DATABASE command.
Tom Lane authored
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source TupleTableSlot just once per query. But if the input is coming from an Append (or, perhaps, other cases?) more than one slot might be returned over the query run. This led to "record type has not been registered" errors when a composite datum was extracted from a non-blessed slot. This bug has been there a long time; I guess it escaped notice because when dealing with subqueries the planner tends to expand whole-row Vars into RowExprs, which don't have the same problem. It is possible to trigger the problem in all active branches, though, as illustrated by the added regression test.
- Jul 08, 2014
Tom Lane authored
While the x output of "select x from t group by x" can be presumed unique, this does not hold for "select x, generate_series(1,10) from t group by x", because we may expand the set-returning function after the grouping step. (Perhaps that should be re-thought; but considering all the other oddities involved with SRFs in targetlists, it seems unlikely we'll change it.) Put a check in query_is_distinct_for() so it's not fooled by such cases. Back-patch to all supported branches. David Rowley
- Jul 07, 2014
Bruce Momjian authored
Previously, when calculations on the need for toast tables changed, pg_upgrade could not handle cases where the new cluster needed a TOAST table and the old cluster did not. (It already handled the opposite case.) This fixes the "OID mismatch" error typically generated in this case. Backpatch through 9.2.
- Jul 02, 2014
Tom Lane authored
This function wasn't originally thought to be really user-facing, because converting a table to a view isn't something we expect people to do manually. So not all that much effort was spent on the error messages; in particular, while the code will complain that you got the column types wrong it won't say exactly what they are. But since we repurposed the code to also check compatibility of rule RETURNING lists, it's definitely user-facing. It now seems worthwhile to add errdetail messages showing exactly what the conflict is when there's a mismatch of column names or types. This is prompted by bug #10836 from Matthias Raffelsieper, which might have been forestalled if the error message had reported the wrong column type as being "record". Per Alvaro's advice, back-patch to branches before 9.4, but resist the temptation to rephrase any existing strings there. Adding new strings is not really a translation degradation; anyway having the info presented in English is better than not having it at all.
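The improvement follows the usual ereport() pattern: a terse errmsg() naming the offending column, plus an errdetail() spelling out the actual vs. expected type via format_type_be(). A hedged sketch of the shape of such a message (the wording, error code, and variable names here are illustrative assumptions, not the committed strings):

```c
ereport(ERROR,
        (errcode(ERRCODE_INVALID_TABLE_DEFINITION),
         errmsg("SELECT rule's target entry %d has different type from column \"%s\"",
                colno, attname),
         errdetail("SELECT target entry has type %s, but column has type %s.",
                   format_type_be(exprtype),
                   format_type_be(attrtype))));
```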
- Jul 01, 2014
Tom Lane authored
The output buffer size in unaccent_lexize() was calculated as input string length times pg_database_encoding_max_length(), which effectively assumes that replacement strings aren't more than one character. While that was all that we previously documented it to support, the code actually has always allowed replacement strings of arbitrary length; so if you tried to make use of longer strings, you were at risk of buffer overrun. To fix, use an expansible StringInfo buffer instead of trying to determine the maximum space needed a-priori. This would be a security issue if unaccent rules files could be installed by unprivileged users; but fortunately they can't, so in the back branches the problem can be labeled as improper configuration by a superuser. Nonetheless, a memory stomp isn't a nice way of reacting to improper configuration, so let's back-patch the fix.
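StringInfo is the backend's standard expansible buffer: it starts small and enlarges itself as data is appended, so no up-front size estimate is needed and over-long replacement strings cannot overrun anything. A hedged sketch of the pattern (backend/extension-style code, not the actual unaccent.c change; names are illustrative):

```c
#include "postgres.h"
#include "lib/stringinfo.h"

/*
 * Concatenate replacement strings of arbitrary length.  initStringInfo()
 * allocates a small palloc'd buffer; each append call grows it as needed.
 */
static char *
build_translated(const char **pieces, int npieces)
{
    StringInfoData buf;
    int         i;

    initStringInfo(&buf);
    for (i = 0; i < npieces; i++)
        appendStringInfoString(&buf, pieces[i]);

    return buf.data;            /* palloc'd result; pfree() when done */
}
```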
- Jun 30, 2014
Noah Misch authored
This function continued to use it after heap_endscan() freed it. In passing, don't explicitly create a strategy here. Instead, use the one created by heap_beginscan_strat(), if any. Back-patch to 9.2, where use of a BufferAccessStrategy here was introduced.
- Jun 26, 2014
Tom Lane authored
When we committed a87c7291, we somehow failed to notice that it didn't merely improve plan quality for expression indexes; there were very closely related cases that failed outright with "could not find pathkey item to sort". The failing cases seem to be those where the planner was already capable of selecting a MergeAppend plan, and there was inheritance involved: the lack of appropriate eclass child members would prevent prepare_sort_from_pathkeys() from succeeding on the MergeAppend's child plan nodes for inheritance child tables. Accordingly, back-patch into 9.1 through 9.3, along with an extra regression test case covering the problem. Per trouble report from Michael Glaesemann.
Fujii Masao authored
Commit 7380b638 changed log_filename so that the epoch is no longer appended to it when no format specifier is given. But an example of a CSV log file name with the epoch was still left in the log_filename documentation. This commit removes that obsolete example. It also documents the defaults of log_directory and log_filename. Backpatch to all supported versions. Christoph Berg
- Jun 24, 2014
Heikki Linnakangas authored
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it was still possible with default_with_oids=true. But the rest of the system, including pg_dump, isn't prepared to handle foreign tables with OIDs properly. Backpatch down to 9.1, where foreign tables were introduced. It's possible that there are databases out there that already have foreign tables with OIDs. There isn't much we can do about that, but at least we can prevent them from being created in the future. Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
- Jun 21, 2014
Kevin Grittner authored
By using curly braces, the template had specified that one of "NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED" was required on any CREATE TRIGGER statement, which is not accurate. Change to square brackets makes that optional. Backpatch to 9.1, where the error was introduced.
- Jun 20, 2014
Joe Conway authored
dblink uses a short-lived data conversion memory context. However it was not deleted when no longer needed, leading to a noticeable memory leak under some circumstances. Plug the hole, along with minor refactoring. Backpatch to 9.2 where the leak was introduced. Report and initial patch by MauMau. Reviewed/modified slightly by Tom Lane and me.
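The underlying pattern is the usual one for short-lived backend allocations: create a child memory context, do the conversion work inside it, then delete the context, which releases everything allocated there at once. A hedged sketch using the pre-9.6 AllocSetContextCreate() signature (illustrative only, not the actual dblink diff):

```c
#include "postgres.h"
#include "utils/memutils.h"

static void
convert_one_row(void)
{
    MemoryContext tmpcxt;
    MemoryContext oldcxt;

    tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                                   "data conversion context",
                                   ALLOCSET_DEFAULT_MINSIZE,
                                   ALLOCSET_DEFAULT_INITSIZE,
                                   ALLOCSET_DEFAULT_MAXSIZE);
    oldcxt = MemoryContextSwitchTo(tmpcxt);

    /* ... perform per-row data conversion; all palloc()s land in tmpcxt ... */

    MemoryContextSwitchTo(oldcxt);
    MemoryContextDelete(tmpcxt);    /* frees everything allocated above */
}
```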
Tom Lane authored
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM in the query-lifespan memory context. This is insignificant in simple cases where the function relation is scanned only once; but if the function is in a sub-SELECT or is on the inside of a nested loop, any memory consumed during argument evaluation can add up quickly. (The potential for trouble here had been foreseen long ago, per existing comments; but we'd not previously seen a complaint from the field about it.) To fix, create an additional temporary context just for this purpose. Per an example from MauMau. Back-patch to all active branches.
- Jun 14, 2014
Noah Misch authored
Commit 453a5d91 made it available to the src/test/regress build of pg_regress, but all pg_regress builds need the same treatment. Patch 9.2 through 8.4; in 9.3 and later, pg_regress gets pqsignal() via libpgport.
Noah Misch authored
Any OS user able to access the socket can connect as the bootstrap superuser and proceed to execute arbitrary code as the OS user running the test. Protect against that by placing the socket in a temporary, mode-0700 subdirectory of /tmp. The pg_regress-based test suites and the pg_upgrade test suite were vulnerable; the $(prove_check)-based test suites were already secure. Back-patch to 8.4 (all supported versions). The hazard remains wherever the temporary cluster accepts TCP connections, notably on Windows. As a convenient side effect, this lets testing proceed smoothly in builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values like /var/run/postgresql are often unwritable to the build user. Security: CVE-2014-0067
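The hardening technique itself is generic: create a freshly named, mode-0700 directory and point the temporary server's socket directory at it, so only the owning OS user can reach the socket. A minimal standalone illustration of the directory step (POSIX C; the path template and GUC spelling are assumptions for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
    char        template[] = "/tmp/pg_regress-XXXXXX";
    char       *sockdir = mkdtemp(template);    /* created with mode 0700 */

    if (sockdir == NULL)
    {
        perror("mkdtemp");
        return 1;
    }

    /*
     * The temporary postmaster would then be started with something like
     *   -c unix_socket_directories='<sockdir>'   (unix_socket_directory pre-9.3)
     * and test clients connect with PGHOST set to <sockdir>.
     */
    printf("socket directory: %s\n", sockdir);
    return 0;
}
```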
Noah Misch authored
This function is pervasive on free software operating systems; import NetBSD's implementation. Back-patch to 8.4, like the commit that will harness it.
- Jun 13, 2014
Tom Lane authored
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch of COMMENT commands into a single BLOB COMMENTS archive object. With sufficiently many such comments, some of the commands would likely get split across bufferloads when restoring, causing failures in direct-to-database restores (though no problem would be evident in text output). This is the same type of issue we have with table data dumped as INSERT commands, and it can be fixed in the same way, by using a mini SQL lexer to figure out where the command boundaries are. Fortunately, the COMMENT commands are no more complex to lex than INSERTs, so we can just re-use the existing lexer for INSERTs. Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
- Jun 12, 2014
Tom Lane authored
Robert Frost is no longer with us, but his copyrights still are, so let's stop using "Stopping by Woods on a Snowy Evening" as test data before somebody decides to sue us. Wordsworth is more safely dead.
- Jun 11, 2014
Tom Lane authored
When we grabbed this file off the Snowball project's website, we mistakenly supposed that it was in LATIN1 encoding, but evidently it was actually in LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too, but there aren't any of those in the file. Other characters used in the file have the same codes in LATIN1 and LATIN2, which no doubt helped hide the problem for so long. The error is not only ours: the Snowball project also was confused about which encoding is required for Hungarian. But dealing with that will require source-code changes that I'm not at all sure we'll wish to back-patch. Fixing the stopword file seems reasonably safe to back-patch however.
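The misconversion is easy to see at the byte level: 0xF5 means U+00F5 (õ) in LATIN1 but U+0151 (ő) in LATIN2. A small standalone check using POSIX iconv (encoding names as glibc spells them; output assumes a UTF-8 terminal):

```c
#include <iconv.h>
#include <stdio.h>

static void
decode(const char *charset)
{
    iconv_t     cd = iconv_open("UTF-8", charset);
    char        in[] = "\xF5";
    char        out[8] = {0};
    char       *inp = in;
    char       *outp = out;
    size_t      inleft = 1;
    size_t      outleft = sizeof(out) - 1;

    if (cd == (iconv_t) -1)
    {
        perror("iconv_open");
        return;
    }
    iconv(cd, &inp, &inleft, &outp, &outleft);
    printf("0xF5 in %s -> %s\n", charset, out);
    iconv_close(cd);
}

int
main(void)
{
    decode("ISO-8859-1");       /* LATIN1 reading: õ (U+00F5) */
    decode("ISO-8859-2");       /* LATIN2 reading: ő (U+0151) */
    return 0;
}
```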
- Jun 10, 2014
Tom Lane authored
Commit 9e7e29c7 fixed some problems with LATERAL references in PlaceHolderVars, one of which was that "createplan.c wasn't handling nested PlaceHolderVars properly". I failed to see that this problem might occur in older versions as well; but it can, as demonstrated in bug #10587 from Geoff Speicher. In this case the nesting occurs due to push-down of PlaceHolderVar expressions into a parameterized path. So, back-patch the relevant changes from 9e7e29c7 into 9.2 where parameterized paths were introduced. (Perhaps I'm still being too myopic, but I'm hesitant to change older branches without some evidence that the case can occur there.)