- Jan 25, 2013
-
-
Magnus Hagander authored
Noted by Joe Van Dyk
-
Bruce Momjian authored
Backpatch to 9.2. Patch from Kohei KaiGai
-
Tom Lane authored
Since 9.0, the count parameter has only limited the number of tuples actually returned by the executor. It doesn't affect the behavior of INSERT/UPDATE/DELETE unless RETURNING is specified, because without RETURNING, the ModifyTable plan node doesn't return control to execMain.c for each tuple. And we only check the limit at the top level. While this behavioral change was unintentional at the time, discussion of bug #6572 led us to the conclusion that we prefer the new behavior anyway, and so we should just adjust the docs to match rather than change the code. Accordingly, do that. Back-patch as far as 9.0 so that the docs match the code in each branch.
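A minimal sketch of how that documented behavior plays out through SPI (the table and column names here are hypothetical, not from the commit):

```c
#include "postgres.h"
#include "executor/spi.h"

/* Sketch only: the effect of SPI_execute()'s count argument with and
 * without RETURNING, per the behavior documented above. */
static void
count_limit_demo(void)
{
    SPI_connect();

    /* No RETURNING: the ModifyTable node never returns control per tuple,
     * so the count of 5 has no effect and every matching row is updated. */
    SPI_execute("UPDATE t SET done = true WHERE done = false", false, 5);

    /* With RETURNING: execution stops once 5 tuples have been returned,
     * so only 5 rows end up updated. */
    SPI_execute("UPDATE t SET done = true WHERE done = false RETURNING id",
                false, 5);

    SPI_finish();
}
```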
-
- Jan 24, 2013
-
-
Andrew Dunstan authored
This ensures that mapping of non-ASCII prompts to the correct code page occurs. Bug report and original patch from Alexander Law, reviewed and reworked by Noah Misch. Backpatch to all live branches.
-
Simon Riggs authored
The machinery around XLOG_HEAP2_CLEANUP_INFO failed to correctly pass through the necessary information on latestRemovedXid, avoiding cancellations in some infrequent concurrent update/cleanup scenarios. Backpatchable fix to 9.0. Detailed bug report and fix by Noah Misch, backpatchable version by me.
-
Heikki Linnakangas authored
Backpatch to 9.2, like the previous fix.
-
Tom Lane authored
When we eliminated "unnecessary" wakeups of the syslogger process, we broke size-based logfile rotation on Windows, because on that platform data transfer is done in a separate thread. While non-Windows platforms would recheck the output file size after every log message, Windows only did so when the control thread woke up for some other reason, which might be quite infrequent. Per bug #7814 from Tsunezumi. Back-patch to 9.2 where the problem was introduced. Jeff Janes
-
- Jan 23, 2013
-
-
Kevin Grittner authored
In situations where there are over 8MB of empty pages at the end of a table, the truncation work for trailing empty pages takes longer than deadlock_timeout, and there is frequent access to the table by processes other than autovacuum, there was a problem with the autovacuum worker process being canceled by the deadlock checking code. The truncation work done by autovacuum up to that point was lost, and the attempt was retried by a later autovacuum worker. The attempts could continue indefinitely without making progress, consuming resources and blocking other processes for up to deadlock_timeout each time.

This patch has the autovacuum worker checking whether it is blocking any other thread at 20ms intervals. If such a condition develops, the autovacuum worker will persist the work it has done so far, release its lock on the table, and sleep in 50ms intervals for up to 5 seconds, hoping to be able to re-acquire the lock and try again. If it is unable to get the lock in that time, it moves on and a worker will try to continue later from the point this one left off.

While this patch doesn't change the rules about when and what to truncate, it does cause the truncation to occur sooner, with less blocking, and with the consumption of fewer resources when there is contention for the table's lock. The only user-visible change other than improved performance is that the table size during truncation may change incrementally instead of just once.

Backpatched to 9.0 from initial master commit at b19e4250 -- before that the differences are too large to be clearly safe. Jan Wieck
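An illustrative sketch of the retry loop described above (the 50ms/5-second intervals come from the commit message; the function and constant names are made up for illustration, not taken from the patch):

```c
#include "postgres.h"
#include "storage/lmgr.h"
#include "utils/rel.h"

#define TRUNCATE_LOCK_WAIT_INTERVAL_MS   50
#define TRUNCATE_LOCK_TIMEOUT_MS       5000

/* Sketch only: after releasing its lock mid-truncation, the worker retries
 * in 50ms steps for up to 5 seconds before giving up. */
static bool
reacquire_truncate_lock(Relation rel)
{
    int     waited_ms = 0;

    while (waited_ms < TRUNCATE_LOCK_TIMEOUT_MS)
    {
        /* Try to get the exclusive lock back without blocking anyone. */
        if (ConditionalLockRelation(rel, AccessExclusiveLock))
            return true;        /* resume truncation where we left off */

        pg_usleep(TRUNCATE_LOCK_WAIT_INTERVAL_MS * 1000L);
        waited_ms += TRUNCATE_LOCK_WAIT_INTERVAL_MS;
    }
    return false;               /* give up; a later worker continues */
}
```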
-
- Jan 21, 2013
-
-
Tom Lane authored
This bug goes back to the original Postgres95 sources. Its significance to modern PG versions is marginal, since we have not used PQprintTuples() internally in a very long time, and it doesn't seem to have ever been documented either. Still, it *is* exposed to client apps, so somebody out there might possibly be using it. Xi Wang
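Since the function is still part of libpq's exported API, a small client-side usage sketch (the connection string and query are placeholders, not from the commit):

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    res = PQexec(conn, "SELECT relname, relkind FROM pg_class LIMIT 5");

    /* args: result, output stream, print attribute names,
     * terse output, column width (0 = variable) */
    PQprintTuples(res, stdout, 1, 0, 0);

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```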
-
Tom Lane authored
The code failed to detect an out-of-memory failure. Xi Wang
-
Peter Eisentraut authored
Leading white space before the "http:" is apparently treated as a relative link at least by some browsers.
-
- Jan 20, 2013
-
-
Magnus Hagander authored
Josh Kupershmidt
-
- Jan 19, 2013
-
-
Tom Lane authored
Un-double the backslashes in the LIKE patterns, since standard_conforming_strings is now the default. Just to be sure, include a command to set standard_conforming_strings to ON in the example. Back-patch to 9.1, where standard_conforming_strings became the default. Josh Kupershmidt, reviewed by Jeff Janes
-
Andrew Dunstan authored
Complaint and patch from Zoltán Böszörményi. When cross-compiling, the native make doesn't know about the Windows .exe suffix, so it only builds with it when explicitly told to do so. The native make will not see the link between the target name and the built executable, and might thus do unnecessary work, but that's a bigger problem than this one, if in fact we consider it a problem at all. Back-patch to all live branches.
-
Tom Lane authored
Use of SnapshotNow is known to expose us to race conditions if the tuple(s) being sought could be updated by concurrently-committing transactions. CREATE DATABASE and DROP DATABASE are particularly exposed because they do heavyweight filesystem operations during their scans of pg_tablespace, so that the scans run for a very long time compared to most. Furthermore, the potential consequences of a missed or twice-visited row are nastier than average:

* createdb() could fail with a bogus "file already exists" error, or silently fail to copy one or more tablespace's worth of files into the new database.

* remove_dbtablespaces() could miss one or more tablespaces, thus failing to free filesystem space for the dropped database.

* check_db_file_conflict() could likewise miss a tablespace, leading to an OID conflict that could result in data loss either immediately or in future operations. (This seems of very low probability, though, since a duplicate database OID would be unlikely to start with.)

Hence, it seems worth fixing these three places to use MVCC snapshots, even though this will someday be superseded by a generic solution to SnapshotNow race conditions. Back-patch to all active branches. Stephen Frost and Tom Lane
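A minimal sketch of the pattern the fix adopts, assuming the 9.2-era heap-scan and snapshot-manager APIs (not the committed diff itself):

```c
#include "postgres.h"
#include "access/heapam.h"
#include "catalog/pg_tablespace.h"
#include "storage/lock.h"
#include "utils/snapmgr.h"

/* Sketch only: scan pg_tablespace under a registered MVCC snapshot instead
 * of SnapshotNow, so a long-running scan sees each row exactly once. */
static void
scan_tablespaces_mvcc(void)
{
    Relation     rel = heap_open(TableSpaceRelationId, AccessShareLock);
    Snapshot     snapshot = RegisterSnapshot(GetLatestSnapshot());
    HeapScanDesc scan = heap_beginscan(rel, snapshot, 0, NULL);
    HeapTuple    tuple;

    while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
    {
        /* ... copy or check this tablespace's files here ... */
    }

    heap_endscan(scan);
    UnregisterSnapshot(snapshot);
    heap_close(rel, AccessShareLock);
}
```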
-
- Jan 18, 2013
-
-
Robert Haas authored
This got broken in the original fast-path locking patch, because I failed to account for the fact that Hot Standby startup process might take a strong relation lock on a relation in a database to which it is not bound, and confused MyDatabaseId with the database ID of the relation being locked. Report and diagnosis by Andres Freund. Final form of patch by me.
-
- Jan 15, 2013
-
-
Heikki Linnakangas authored
"none" could mislead to think that you're connected a database with that name. Also, it needs to be translated, which might be hard without some context. So in back-branches, use empty string, so that the message is (currently ""), which is at least unambiguous and doens't require translation. In master, it's no problem to add translatable strings, so use a different fix there.
-
Heikki Linnakangas authored
Backpatch all the way to 8.3. Fixes bug #7811, per report and diagnosis by Meng Qingzhong.
-
- Jan 14, 2013
-
-
Tom Lane authored
Dates outside the supported range could be entered, but would not print reasonably, and operations such as conversion to timestamp wouldn't behave sanely either. Since this has the potential to result in undumpable table data, it seems worth back-patching. Hitoshi Harada
-
Tom Lane authored
This seems to have been invented in 2011 to represent GMT+3 with no daylight-savings rules, as now used in Europe/Kaliningrad and Europe/Minsk. There are no conflicts, so we might as well add it to the Default list. Per bug #7804 from Ruslan Izmaylov.
-
- Jan 12, 2013
-
-
Andrew Dunstan authored
This is now used by ecpg tests, and not clobbered by pg_upgrade tests. This change won't affect anything that doesn't set this environment variable, but will enable the buildfarm to control exactly what port regression test installs will be running on, and thus to detect possible rogue postmasters more easily. Backpatch to release 9.2 where EXTRA_REGRESS_OPTS was first used.
-
- Jan 11, 2013
-
-
Tom Lane authored
This partially reverts commit 21a39de5, restoring the pre-9.2 cost estimates for index usage. That change introduced much too large a bias against larger indexes, as per reports from Jeff Janes and others. The whole thing needs a rewrite, which I've done in HEAD, but the safest thing to do in 9.2 is just to undo this multiplier change.
-
- Jan 09, 2013
-
-
Magnus Hagander authored
JiangGuiqing
-
Tom Lane authored
If VirtualXactLock() has to wait for a transaction that holds its VXID lock as a fast-path lock, it must first convert the fast-path lock to a regular lock. It failed to take the required "partition" lock on the main shared-memory lock table while doing so. This is the direct cause of the assert failure in GetLockStatusData() recently observed in the buildfarm, but more worryingly it could result in arbitrary corruption of the shared lock table if some other process were concurrently engaged in modifying the same partition of the lock table. Fortunately, VirtualXactLock() is only used by CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY, so the opportunities for failure are fewer than they might have been. In passing, improve some comments and be a bit more consistent about order of operations.
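A rough, simplified sketch of the ordering the fix enforces, assuming the 9.2-era lock-manager APIs (not the committed diff; the helper name is made up):

```c
#include "postgres.h"
#include "storage/lock.h"
#include "storage/lwlock.h"

/* Sketch only: hold the appropriate lock-table partition lock while a
 * fast-path VXID lock is promoted to a regular lock table entry. */
static void
promote_fastpath_vxid_lock(const LOCKTAG *locktag)
{
    uint32      hashcode = LockTagHashCode(locktag);
    LWLockId    partitionLock = LockHashPartitionLock(hashcode);

    LWLockAcquire(partitionLock, LW_EXCLUSIVE);
    /* ... set up the lock and proclock entries in the shared hash table ... */
    LWLockRelease(partitionLock);
}
```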
-
- Jan 04, 2013
-
-
Tom Lane authored
SPI_execute() and related functions create a CachedPlan, execute it once, and immediately discard it, so that the functionality offered by plancache.c is of no value in this code path. And performance measurements show that the extra data copying and invalidation checking done by plancache.c slows down simple queries by 10% or more compared to 9.1. However, enough of the SPI code is shared with functions that do need plan caching that it seems impractical to bypass plancache.c altogether. Instead, let's invent a variant version of cached plans that preserves 99% of the API but doesn't offer any of the actual functionality, nor the overhead. This puts SPI_execute() performance back on par, or maybe even slightly better, than it was before.

This change should resolve recent complaints of performance degradation from Dong Ye, Pavel Stehule, and others. By avoiding data copying, this change also reduces the amount of memory needed to execute many-statement SPI_execute() strings, as for instance in a recent complaint from Tomas Vondra.

An additional benefit of this change is that multi-statement SPI_execute() query strings are now processed fully serially, that is we complete execution of earlier statements before running parse analysis and planning on following ones. This eliminates a long-standing POLA violation, in that DDL that affects the behavior of a later statement will now behave as expected.

Back-patch to 9.2, since this was a performance regression compared to 9.1. (In 9.2, place the added struct fields so as to avoid changing the offsets of existing fields.) Heikki Linnakangas and Tom Lane
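A hypothetical illustration of the serial-processing point (the table and column are placeholders): because earlier statements in the string now finish executing before later ones are parsed and planned, DDL is visible to the statements that follow it.

```c
#include "postgres.h"
#include "executor/spi.h"

static void
multi_statement_demo(void)
{
    SPI_connect();

    /* The UPDATE is parsed and planned only after the ALTER has run, so it
     * sees the new column instead of failing at parse analysis. */
    SPI_execute("ALTER TABLE t ADD COLUMN note text; "
                "UPDATE t SET note = 'filled in'",
                false, 0);

    SPI_finish();
}
```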
-
Tom Lane authored
On non-Windows machines, we use the Unix socket for connections to test postmasters, so there is no need to create a TCP socket. Furthermore, doing so causes failures due to port conflicts if two builds are carried out concurrently on one machine. (If the builds are done in different chroots, which is standard practice at least in Red Hat distros, there is no risk of conflict on the Unix socket.) Suppressing the TCP socket by setting listen_addresses to empty has long been standard practice for pg_regress, and pg_upgrade knows about this too ... but pg_upgrade's test.sh didn't get the memo. Back-patch to 9.2, and also sync the 9.2 version of the script with HEAD as much as practical.
-
- Jan 03, 2013
-
-
Heikki Linnakangas authored
If you take a base backup from a standby server with "pg_basebackup -X fetch", and the timeline switches while the backup is being taken, the backup used to fail with an error "requested WAL segment %s has already been removed". This is because the server-side code that sends over the required WAL files would not construct the WAL filename with the correct timeline after a switch. Fix that by using readdir() to scan pg_xlog for all the WAL segments in the range, regardless of timeline. Also, include all timeline history files in the backup, if taken with "-X fetch". That fixes another related bug: If a timeline switch happened just before the backup was initiated in a standby, the WAL segment containing the initial checkpoint record contains WAL from the older timeline too. Recovery will not accept that without a timeline history file that lists the older timeline. Backpatch to 9.2. Versions prior to that were not affected as you could not take a base backup from a standby before 9.2.
-
- Jan 01, 2013
-
-
Bruce Momjian authored
Fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.
-
- Dec 30, 2012
-
-
Heikki Linnakangas authored
The cascading standby patch in 9.2 changed the way WAL files are treated when restored from the archive. Before, they were restored under a temporary filename, and not kept in pg_xlog, but after the patch, they were copied under pg_xlog. This is necessary for a cascading standby to find them, but it also means that if the archive goes offline and a standby is restarted, it can recover back to where it was using the files in pg_xlog. It also means that if you take an offline backup from a standby server, it includes all the required WAL files in pg_xlog. However, the same change was not made to timeline history files, so if the WAL segment containing the checkpoint record contains a timeline switch, you will still get an error if you try to restart recovery without the archive, or recover from an offline backup taken from the standby. With this patch, timeline history files restored from archive are copied into pg_xlog like WAL files are, so that pg_xlog contains all the files required to recover. This is a corner-case pre-existing issue in 9.2, but even more important in master where it's possible for a standby to follow a timeline switch through streaming replication. To make that possible, the timeline history files must be present in pg_xlog.
-
Peter Eisentraut authored
Parts of the description had claimed incorrect pg_hba.conf option names for LDAP authentication. Albe Laurenz
-
- Dec 24, 2012
-
-
Tom Lane authored
Code review for commit 2f582f76: don't use a static variable for what ought to be a deparse_context field, fix non-multibyte-safe test for spaces, avoid useless and potentially O(N^2) (though admittedly with a very small constant) calculations of wrap positions when we aren't going to wrap.
-
- Dec 23, 2012
-
-
Tom Lane authored
transformExpr() is required to cope with already-transformed expression trees, for various ugly-but-not-quite-worth-cleaning-up reasons. However, some of its newer subroutines hadn't gotten the memo. This accounts for bug #7763 from Norbert Buchmuller: transformRowExpr() was overwriting the previously determined type of a RowExpr during CREATE TABLE LIKE INCLUDING INDEXES. Additional investigation showed that transformXmlExpr had the same kind of problem, but all the other cases seem to be safe. Andres Freund and Tom Lane
-
- Dec 22, 2012
-
-
Tom Lane authored
"GetForeignTableColumnOptions" should be "GetForeignColumnOptions". Noted by Metin Döşlü.
-
- Dec 21, 2012
-
-
Heikki Linnakangas authored
If a relation file was removed when the server-side counterpart of pg_basebackup was just about to open it to send it to the client, you'd get a "could not open file" error. Fix that. Backpatch to 9.1, this goes back to when pg_basebackup was introduced.
-
Peter Eisentraut authored
-
- Dec 20, 2012
-
-
Tom Lane authored
If pg_extension_config_dump() is executed again for a table already listed in the extension's extconfig, the code was blindly making a new array entry. This does not seem useful. Fix it to replace the existing array entry instead, so that it's possible for extension update scripts to alter the filter conditions for configuration tables. In addition, teach ALTER EXTENSION DROP TABLE to check for an extconfig entry for the target table, and remove it if present. This is not a 100% solution because it's allowed for an extension update script to just summarily DROP a member table, and that code path doesn't go through ExecAlterExtensionContentsStmt. We could probably make that case clean things up if we had to, but it would involve sticking a very ugly wart somewhere in the guts of dependency.c. Since on the whole it seems quite unlikely that extension updates would want to remove pre-existing configuration tables, making the case possible with an explicit command seems sufficient. Per bug #7756 from Regina Obe. Back-patch to 9.1 where extensions were introduced.
-
Heikki Linnakangas authored
After the recovery target timeline is changed, we would still recycle and preallocate WAL segments on the old target timeline. Those WAL segments created for the old timeline are a waste of space, although otherwise harmless. The problem is that when installing a recycled WAL segment as a future one, ThisTimeLineID is used to construct the filename. ThisTimeLineID is initialized in the checkpointer process to the recovery target timeline at startup, but it was not updated when the startup process chooses a new target timeline (recovery_target_timeline='latest'). To fix, always update ThisTimeLineID before recycling WAL segments at a restartpoint. This still leaves a small window where we might install WAL segments under wrong timeline ID, if the target timeline is changed just as we're about to start recycling. Also, when we're not on the target timeline yet, but still replaying some older timeline, we'll install WAL segments to the newer timeline anyway and they will still go wasted. We'll just live with the waste in that situation. Commit to 9.2 and 9.1. Older versions didn't change recovery target timeline after startup, and for master, I'll commit a slightly different variant of this.
-
- Dec 19, 2012
-
-
Heikki Linnakangas authored
If you restored from a backup taken from a standby, and the last record in the backup is the checkpoint record, ie. there is no redo required except for the checkpoint record, we would fail to notice that we've reached the end-of-backup point, and the database is consistent. The result was an error "WAL ends before end of online backup". To fix, move the have-we-reached-end-of-backup check into CheckRecoveryConsistency(), which is already responsible for similar checks with minRecoveryPoint, and is called in the right places. Backpatch to 9.2, this check and bug did not exist before that.
-
- Dec 18, 2012
-
-
Tom Lane authored
Some versions of libedit expose bogus definitions of setproctitle(), optreset, and perhaps other symbols that we don't want configure to pick up on. There was a previous report of similar problems with strlcpy(), which we addressed in commit 59cf88da, but the problem has evidently grown in scope since then. In hopes of not having to deal with it again in future, rearrange configure's tests for supplied functions so that we ignore libedit/libreadline except when probing specifically for functions we expect them to provide. Per report from Christoph Berg, though this is slightly more aggressive than his proposed patch.
-
Tom Lane authored
During crash recovery, we remove disk files belonging to temporary tables, but the system catalog entries for such tables are intentionally not cleaned up right away. Instead, the first backend that uses a temp schema is expected to clean out any leftover objects therein. This approach requires that we be careful to ignore leftover temp tables (since any actual access attempt would fail), *even if their BackendId matches our session*, if we have not yet established use of the session's corresponding temp schema. That worked fine in the past, but was broken by commit debcec7d which incorrectly removed the rd_islocaltemp relcache flag. Put it back, and undo various changes that substituted tests like "rel->rd_backend == MyBackendId" for use of a state-aware flag. Per trouble report from Heikki Linnakangas. Back-patch to 9.1 where the erroneous change was made. In the back branches, be careful to add rd_islocaltemp in a spot in the struct that was alignment padding before, so as not to break existing add-on code.
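A simplified sketch of the state-aware test the commit restores, assuming the rd_islocaltemp relcache flag and the usual pg_class persistence field (not the committed diff):

```c
#include "postgres.h"
#include "catalog/pg_class.h"
#include "utils/rel.h"

/* Sketch only: a temp table that is not flagged as belonging to our own,
 * established temp schema must be treated as another session's (or as a
 * leftover awaiting cleanup), even if its BackendId happens to match. */
static bool
is_other_temp_table(Relation rel)
{
    return rel->rd_rel->relpersistence == RELPERSISTENCE_TEMP &&
           !rel->rd_islocaltemp;
}
```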
-