- Jan 23, 2013
Bruce Momjian authored
With AtEOXact applied, --single-transaction makes pg_restore slower, and has the potential to require lock table configuration, so remove the argument. Per suggestion from Tom.
- Jan 18, 2013
Bruce Momjian authored
If the cluster alignments don't match, output this suggestion: "Likely one cluster is a 32-bit install, the other 64-bit".
- Jan 14, 2013
Tom Lane authored
In commit 71450d7f, we added code to inform suitably-intelligent compilers that ereport() doesn't return if the elevel is ERROR or higher. This patch extends that to elog(), and also fixes a double-evaluation hazard that the previous commit created in ereport(), as well as reducing the emitted code size.

The elog() improvement requires the compiler to support __VA_ARGS__, which should be available in just about anything nowadays since it's required by C99. But our minimum language baseline is still C89, so add a configure test for that.

The previous commit assumed that ereport's elevel could be evaluated twice, which isn't terribly safe --- there are already counterexamples in xlog.c. On compilers that have __builtin_constant_p, we can use that to protect the second test, since there's no possible optimization gain if the compiler doesn't know the value of elevel. Otherwise, use a local variable inside the macros to prevent double evaluation. The local-variable solution is inferior because (a) it leads to useless code being emitted when elevel isn't constant, and (b) it increases the optimization level needed for the compiler to recognize that subsequent code is unreachable. But it seems better than not teaching non-gcc compilers about unreachability at all.

Lastly, if the compiler has __builtin_unreachable(), we can use that instead of abort(), resulting in a noticeable code savings since no function call is actually emitted. However, it seems wise to do this only in non-assert builds. In an assert build, continue to use abort(), so that the behavior will be predictable and debuggable if the "impossible" happens.

These changes involve making the ereport and elog macros emit do-while statement blocks not just expressions, which forces small changes in a few call sites.

Andres Freund, Tom Lane, Heikki Linnakangas
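The resulting pattern can be sketched as below. This is a simplified, hypothetical rendition using invented names (my_elog, my_unreachable); the real macros live in elog.h and are more involved, and the HAVE__* symbols stand for the configure results described above.

```c
/* In non-assert builds, prefer __builtin_unreachable(): no call is
 * emitted.  In assert builds keep abort() so the "impossible" case
 * stays predictable and debuggable. */
#if defined(HAVE__BUILTIN_UNREACHABLE) && !defined(USE_ASSERT_CHECKING)
#define my_unreachable()  __builtin_unreachable()
#else
#define my_unreachable()  abort()
#endif

#ifdef HAVE__BUILTIN_CONSTANT_P
/* Safe to test elevel again: if it isn't a compile-time constant,
 * __builtin_constant_p folds to 0 and elevel is never re-evaluated. */
#define my_elog(elevel, ...) \
	do { \
		elog_start(__FILE__, __LINE__, __func__); \
		elog_finish(elevel, __VA_ARGS__); \
		if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \
			my_unreachable(); \
	} while (0)
#else
/* Fallback: a local variable prevents double evaluation, at the cost
 * of dead code being emitted when elevel isn't constant. */
#define my_elog(elevel, ...) \
	do { \
		const int elevel_ = (elevel); \
		elog_start(__FILE__, __LINE__, __func__); \
		elog_finish(elevel_, __VA_ARGS__); \
		if (elevel_ >= ERROR) \
			my_unreachable(); \
	} while (0)
#endif
```

Note how the do-while wrapper makes the macro a statement block rather than an expression, which is exactly why a few call sites had to change.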
- Jan 12, 2013
Andrew Dunstan authored
EXTRA_REGRESS_OPTS is now used by ecpg tests, and is not clobbered by pg_upgrade tests. This change won't affect anything that doesn't set this environment variable, but it will enable the buildfarm to control exactly what port regression test installs will be running on, and thus to detect possible rogue postmasters more easily. Backpatch to release 9.2, where EXTRA_REGRESS_OPTS was first used.
- Jan 09, 2013
Bruce Momjian authored
This patch implements parallel copying/linking of files by tablespace using the --jobs option in pg_upgrade.
- Jan 07, 2013
Tatsuo Ishii authored
Add a new quiet logging option (-q) to pgbench's initialize mode (-i), producing only one progress message per 5 seconds along with elapsed time and estimated remaining time. Also add elapsed time and estimated remaining time to the default logging (which prints one message every 100,000 rows). Patch contributed by Tomas Vondra, reviewed by Jeevan Chalke and Tatsuo Ishii.
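The estimated remaining time mentioned here follows from simple linear extrapolation. A minimal sketch, with a hypothetical helper name (not the actual pgbench code); total is assumed to be positive:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical progress reporter: given rows loaded so far, print the
 * elapsed time and a linear estimate of the time remaining. */
static void
print_progress(time_t start, long done, long total)
{
	double		elapsed = difftime(time(NULL), start);
	double		remaining = (done > 0) ? elapsed * (double) (total - done) / done : 0.0;

	fprintf(stderr, "%ld of %ld tuples (%d%%) done (elapsed %.2f s, remaining %.2f s)\n",
			done, total, (int) ((done * 100) / total), elapsed, remaining);
}
```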
- Jan 04, 2013
Tom Lane authored
On non-Windows machines, we use the Unix socket for connections to test postmasters, so there is no need to create a TCP socket. Furthermore, doing so causes failures due to port conflicts if two builds are carried out concurrently on one machine. (If the builds are done in different chroots, which is standard practice at least in Red Hat distros, there is no risk of conflict on the Unix socket.) Suppressing the TCP socket by setting listen_addresses to empty has long been standard practice for pg_regress, and pg_upgrade knows about this too ... but pg_upgrade's test.sh didn't get the memo. Back-patch to 9.2, and also sync the 9.2 version of the script with HEAD as much as practical.
- Jan 03, 2013
Bruce Momjian authored
Adjust pg_upgrade page conversion functions (which are not used) to return void so transfer_all_new_dbs can return void.
- Jan 01, 2013
Bruce Momjian authored
Update copyrights for 2013: fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.
- Dec 27, 2012
Bruce Momjian authored
Add pg_upgrade --jobs, which allows parallel dump/restore of databases and improves performance.
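On Unix, this style of parallelism is commonly built on fork()/waitpid(). The sketch below is a hypothetical simplification (dump_one_db and run_in_parallel are invented names, and the real pg_upgrade also has a Windows thread-based path):

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for the real per-database dump/restore work. */
extern void dump_one_db(int dbnum);

/* Run one child per database, with at most "jobs" children at a time. */
static void
run_in_parallel(int jobs, int ndbs)
{
	int			dbnum;
	int			running = 0;

	for (dbnum = 0; dbnum < ndbs; dbnum++)
	{
		pid_t		pid;

		/* at capacity: reap one finished child before forking another */
		if (running >= jobs && waitpid(-1, NULL, 0) > 0)
			running--;

		pid = fork();
		if (pid == 0)
		{
			dump_one_db(dbnum);	/* child handles one database */
			_exit(0);
		}
		else if (pid > 0)
			running++;
		else
			exit(1);			/* fork failed */
	}

	while (waitpid(-1, NULL, 0) > 0)
		;						/* wait for remaining children */
}
```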
- Dec 20, 2012
Bruce Momjian authored
Because the client encoding might not match the server encoding, pg_upgrade can't allocate NAMEDATALEN bytes for storage of database, relation, and namespace identifiers. Instead pg_strdup() the memory and free it. Also add C comment in initdb.c about safe NAMEDATALEN usage.
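A sketch of the before/after pattern described, under stated assumptions: res, rownum, and i_datname stand in for the surrounding libpq query handling, and pg_strdup()/pg_free() are pg_upgrade's checked allocator wrappers.

```c
#include <libpq-fe.h>

extern char *pg_strdup(const char *s);	/* pg_upgrade's checked strdup */

/* Hypothetical accessor illustrating the change. */
static char *
get_db_name(PGresult *res, int rownum, int i_datname)
{
	/* Before (unsafe): a fixed buffer can truncate an identifier whose
	 * client-encoding form exceeds NAMEDATALEN bytes:
	 *
	 *     char name[NAMEDATALEN];
	 *     strlcpy(name, PQgetvalue(res, rownum, i_datname), sizeof(name));
	 */

	/* After: duplicate at its actual length; caller frees with pg_free(). */
	return pg_strdup(PQgetvalue(res, rownum, i_datname));
}
```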
Bruce Momjian authored
Add comment stating that constraint and index names must match.
- Dec 11, 2012
Bruce Momjian authored
All versions of pg_upgrade would upgrade invalid indexes left behind by CREATE INDEX CONCURRENTLY failures and mark them as valid. This patch adds a check to all pg_upgrade versions and throws an error during upgrade or --check if such indexes are found. Backpatch to 9.2, 9.1, 9.0. Patch slightly adjusted.
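A hypothetical sketch of what such a check can look like. Helper names follow pg_upgrade's style of the era (executeQueryOrDie, pg_log) but the error reporting is simplified; the real check also lists the offending indexes.

```c
#include "pg_upgrade.h"

/* Fail the upgrade if any index was left invalid by a failed
 * CREATE INDEX CONCURRENTLY. */
static void
check_for_invalid_indexes(PGconn *conn)
{
	PGresult   *res;

	res = executeQueryOrDie(conn,
							"SELECT n.nspname, c.relname "
							"FROM pg_catalog.pg_class c, "
							"     pg_catalog.pg_index i, "
							"     pg_catalog.pg_namespace n "
							"WHERE i.indisvalid = false AND "
							"      i.indexrelid = c.oid AND "
							"      c.relnamespace = n.oid");

	if (PQntuples(res) > 0)
		pg_log(PG_FATAL,
			   "invalid indexes found; drop and recreate them before upgrading\n");

	PQclear(res);
}
```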
Andrew Dunstan authored
Normally each module is tested in a database named contrib_regression, which is dropped and recreated at the beginning of each pg_regress run. This new mode, enabled by adding USE_MODULE_DB=1 to the make command line, runs most modules in a database with the module name embedded in it. This will make testing pg_upgrade on clusters with the contrib modules a lot easier. Second attempt at this, this time accommodating make versions older than 3.82. Still to be done: adapt to the MSVC build system. Backpatch to 9.0, which is the earliest version it is reasonably possible to test upgrading from.
Bruce Momjian authored
Fix the previous commit, which added synchronous_commit=off but broke -O/-o due to a missing space in argument passing. Backpatch to 9.2.
- Dec 07, 2012
Bruce Momjian authored
Pg_upgrade displays file names during copy and database names during dump/restore. Andrew Dunstan identified three bugs:
* long file names were being truncated to 60 _leading_ characters, which often do not change for long file names
* file names were truncated to 60 characters in log files
* carriage returns were being output to log files
This commit fixes these --- it prints 60 _trailing_ characters to the status display, and full path names without carriage returns to log files. It also suppresses status output to the log file unless verbose mode is used.
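The trailing-characters fix amounts to printing the tail of the path. A minimal sketch with a hypothetical function name:

```c
#include <string.h>

#define MESSAGE_WIDTH 60

/* Return the last MESSAGE_WIDTH characters of a path for the status
 * display, since the leading characters of long file names often do
 * not change. */
static const char *
trailing_part(const char *path)
{
	size_t		len = strlen(path);

	return (len > MESSAGE_WIDTH) ? path + len - MESSAGE_WIDTH : path;
}
```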
- Dec 06, 2012
Alvaro Herrera authored
Background workers are postmaster subprocesses that run arbitrary user-specified code. They can request shared memory access as well as backend database connections; or they can just use plain libpq frontend database connections.

Modules listed in shared_preload_libraries can register background workers in their _PG_init() function; this is early enough that it's not necessary to provide an extra GUC option, because the necessary extra resources can be allocated early on. Modules can install more than one bgworker, if necessary.

Care is taken that these extra processes do not interfere with other postmaster tasks: only one such process is started on each ServerLoop iteration. This means a large number of them could be waiting to be started up while the postmaster is still able to quickly service external connection requests. Also, the shutdown sequence should not be impacted by a worker process that's reasonably well behaved (i.e., one that promptly responds to termination signals).

The current implementation lets worker processes specify their start time, i.e. at what point in the server startup process they are to be started: right after postmaster start (in which case they mustn't ask for shared memory access), when a consistent state has been reached (useful during recovery in a HOT standby server), or when recovery has terminated (i.e. when normal backends are allowed).

In case of a bgworker crash, the actions to take depend on registration data: if shared memory was requested, then all other connections are taken down (as well as other bgworkers), just as if a regular backend had crashed. The bgworker itself is restarted, too, within a configurable timeframe (which can be configured to be never).

More features to add to this framework can be imagined without much effort, and have been discussed, but this seems good enough as a useful unit already. An elementary sample module is supplied.

Author: Álvaro Herrera. This patch is loosely based on prior patches submitted by KaiGai Kohei, and unsubmitted code by Simon Riggs. Reviewed by: KaiGai Kohei, Markus Wanner, Andres Freund, Heikki Linnakangas, Simon Riggs, Amit Kapila
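Registration from _PG_init() looks roughly like the sketch below, loosely modeled on the elementary sample module mentioned above. Field and constant names follow the 9.3-era API and may differ in detail from the actual bgworker.h; the worker name and restart interval are invented.

```c
#include "postgres.h"
#include "postmaster/bgworker.h"

void		_PG_init(void);
static void my_worker_main(void *arg);

void
_PG_init(void)
{
	BackgroundWorker worker;

	worker.bgw_name = "my sample worker";
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
		BGWORKER_BACKEND_DATABASE_CONNECTION;
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = 60;	/* restart 60 s after a crash */
	worker.bgw_main = my_worker_main;
	worker.bgw_main_arg = NULL;

	RegisterBackgroundWorker(&worker);
}

static void
my_worker_main(void *arg)
{
	/* connect to a database here, then loop doing the module's work */
}
```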
- Dec 05, 2012
Heikki Linnakangas authored
Fujii Masao, reviewed by Kyotaro Horiguchi.
- Dec 04, 2012
Bruce Momjian authored
report is clearer.
Bruce Momjian authored
executed.
Bruce Momjian authored
Add an initdb --sync-only option to sync the data directory to durable storage. Have pg_upgrade use it, and enable server options fsync=off and full_page_writes=off. Document that users turning fsync from off to on should run initdb --sync-only. [ Previous commit was incorrectly applied as a git merge. ]
Bruce Momjian authored
Bruce Momjian authored
Bruce Momjian authored
- Dec 03, 2012
Andrew Dunstan authored
This reverts commit e2b3c21b.
- Dec 02, 2012
Andrew Dunstan authored
Normally each module is tested in a database named contrib_regression, which is dropped and recreated at the beginning of each pg_regress run. This mode, enabled by adding USE_MODULE_DB=1 to the make command line, runs most modules in a database with the module name embedded in it. This will make testing pg_upgrade on clusters with the contrib modules a lot easier. Still to be done: adapt to the MSVC build system. Backpatch to 9.0, which is the earliest version it is reasonably possible to test upgrading from.
- Dec 01, 2012
Bruce Momjian authored
Bruce Momjian authored
In pg_upgrade, remove pg_restore's --single-transaction option, as it throws errors in certain cases.
Bruce Momjian authored
certain cases.
Bruce Momjian authored
status output for dump/restore.
- Nov 30, 2012
Tom Lane authored
It's not safe to examine a shared buffer without any lock.
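A minimal sketch of the rule stated here; the function name is hypothetical, while LockBuffer() and BufferGetPage() are the real bufmgr API:

```c
#include "postgres.h"
#include "storage/bufmgr.h"

/* Take at least a shared content lock before reading a shared
 * buffer's contents. */
static void
examine_buffer_safely(Buffer buffer)
{
	Page		page;

	LockBuffer(buffer, BUFFER_LOCK_SHARE);
	page = BufferGetPage(buffer);
	/* ... it is now safe to inspect the page contents ... */
	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);

	(void) page;				/* sketch only; silence unused warning */
}
```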
Bruce Momjian authored
Have pg_upgrade use pg_restore --single-transaction to restore each database schema. This yields performance improvements for databases with many tables. Also, remove split_old_dump() as it is no longer needed.
Bruce Momjian authored
consistency. Per suggestion from Tom.
Andrew Dunstan authored
This removes existing PG settings from the environment for pg_upgrade tests, just like pg_regress does.
- Nov 29, 2012
Tom Lane authored
Commit 8cb53654, which introduced DROP INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor choice of catalog state representation. The pg_index state for an index that's reached the final pre-drop stage was the same as the state for an index just created by CREATE INDEX CONCURRENTLY. This meant that the (necessary) change to make RelationGetIndexList ignore about-to-die indexes also made it ignore freshly-created indexes; which is catastrophic because the latter do need to be considered in HOT-safety decisions. Failure to do so leads to incorrect index entries and subsequently wrong results from queries depending on the concurrently-created index.

To fix, add an additional boolean column "indislive" to pg_index, so that the freshly-created and about-to-die states can be distinguished. (This change obviously is only possible in HEAD. This patch will need to be back-patched, but in 9.2 we'll use a kluge consisting of overloading the formerly-impossible state of indisvalid = true and indisready = false.)

In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index flag changes they make without exclusive lock on the index are made via heap_inplace_update() rather than a normal transactional update. The latter is not very safe because moving the pg_index tuple could result in concurrent SnapshotNow scans finding it twice or not at all, thus possibly resulting in index corruption. This is a pre-existing bug in CREATE INDEX CONCURRENTLY, which was copied into the DROP code.

In addition, fix various places in the code that ought to check to make sure that the indexes they are manipulating are valid and/or ready as appropriate. These represent bugs that have existed since 8.2, since a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid index behind, and we ought not try to do anything that might fail with such an index.

Also fix RelationReloadIndexInfo to ensure it copies all the pg_index columns that are allowed to change after initial creation. Previously we could have been left with stale values of some fields in an index relcache entry. It's not clear whether this actually had any user-visible consequences, but it's at least a bug waiting to happen.

In addition, do some code and docs review for DROP INDEX CONCURRENTLY; some cosmetic code cleanup but mostly addition and revision of comments. This will need to be back-patched, but in a noticeably different form, so I'm committing it to HEAD before working on the back-patch.

Problem reported by Amit Kapila, diagnosis by Pavan Deolasee, fix by Tom Lane and Andres Freund.
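The three-flag life cycle can be sketched as follows. This is a hypothetical condensation of the states described above, not actual catalog code:

```c
#include <stdbool.h>

/* The three pg_index flags encode an index's life cycle; the new
 * indislive column distinguishes "freshly created by CREATE INDEX
 * CONCURRENTLY" from "about to be dropped by DROP INDEX CONCURRENTLY". */
typedef struct IndexFlags
{
	bool		indislive;		/* false only in the final pre-drop stage */
	bool		indisready;		/* true once inserts must maintain it */
	bool		indisvalid;		/* true once usable in query plans */
} IndexFlags;

static bool
index_matters_for_hot(const IndexFlags *f)
{
	/* About-to-die indexes can be ignored; everything else, including a
	 * freshly created not-yet-valid index, must constrain HOT updates. */
	return f->indislive;
}
```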
- Nov 25, 2012
Bruce Momjian authored
centralizing error/shutdown code.
Bruce Momjian authored
pg_malloc/pg_free.
- Nov 19, 2012
Bruce Momjian authored
error and errno != ENOENT.
- Nov 18, 2012
Tom Lane authored
The previous definitions of these GUC variables allowed them to range up to INT_MAX, but in point of fact the underlying code would suffer overflows or other errors with large values. Reduce the maximum values to something that won't misbehave. There's no apparent value in working harder than this, since very large delays aren't sensible for any of these. (Note: the risk with archive_timeout is that if we're late checking the state, the timestamp difference it's being compared to might overflow. So we need some amount of slop; the choice of INT_MAX/2 is arbitrary.) Per followup investigation of bug #7670. Although this isn't a very significant fix, might as well back-patch.
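A hypothetical illustration of the slop argument; the names are invented, and the real comparison is inside the server, not a standalone helper:

```c
#include <stdbool.h>
#include <time.h>

/* The elapsed time lands in an int before being compared with the
 * timeout, so the timeout's maximum must leave slop for a late check.
 * With the cap at INT_MAX/2, the check would have to be roughly 34
 * years late before the comparison could misbehave. */
static bool
archive_timeout_expired(time_t now, time_t last_switch, int archive_timeout)
{
	int			elapsed = (int) difftime(now, last_switch);

	return elapsed >= archive_timeout;
}
```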
- Nov 15, 2012
Bruce Momjian authored