diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2fbe6a8c24b460d6a07944ef8dcc80765091a7bf..9890f9f6f4eb1326cb7a64076fb1c5d83dc3fcc7 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.294 2010/07/08 10:20:13 mha Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.295 2010/07/16 11:20:23 heikki Exp $ -->
 
 <chapter Id="runtime-config">
  <title>Server Configuration</title>
@@ -1926,7 +1926,8 @@ SET ENABLE_SEQSCAN TO OFF;
         doesn't keep any extra segments for standby purposes, and the number
         of old WAL segments available to standby servers is a function of
         the location of the previous checkpoint and status of WAL
-        archiving.  This parameter can only be set in the
+        archiving.  This parameter has no effect on restartpoints.
+        This parameter can only be set in the
         <filename>postgresql.conf</> file or on the server command line.
        </para>
       </listitem>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 0c83032968a0568e22cab230dc3d344385dfbd2c..cf580e9f45e8c94c002103428f6159bc994b47de 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/wal.sgml,v 1.68 2010/07/08 16:44:12 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/wal.sgml,v 1.69 2010/07/16 11:20:23 heikki Exp $ -->
 
 <chapter id="wal">
  <title>Reliability and the Write-Ahead Log</title>
@@ -449,6 +449,7 @@
  <para>
   There will always be at least one WAL segment file, and will normally
   not be more than (2 + <varname>checkpoint_completion_target</varname>) * <varname>checkpoint_segments</varname> + 1
+  or <varname>checkpoint_segments</> + <xref linkend="guc-wal-keep-segments"> + 1
   files.  Each segment file is normally 16 MB (though this size can be
   altered when building the server).  You can use this to estimate space
   requirements for <acronym>WAL</acronym>.
@@ -460,6 +461,22 @@
   of recycled until the system gets back under this limit.
  </para>
 
+ <para>
+  In archive recovery or standby mode, the server periodically performs
+  <firstterm>restartpoints</><indexterm><primary>restartpoint</></>,
+  which are similar to checkpoints in normal operation: the server forces
+  all its state to disk, updates the <filename>pg_control</> file to
+  indicate that the already-processed WAL data need not be scanned again,
+  and then recycles any old log segment files in the <filename>pg_xlog</>
+  directory. A restartpoint is triggered if at least one checkpoint record
+  has been replayed and <varname>checkpoint_timeout</> seconds have passed
+  since the last restartpoint. In standby mode, a restartpoint is also
+  triggered if <varname>checkpoint_segments</> log segments have been replayed
+  since the last restartpoint and at least one checkpoint record has been
+  replayed. Restartpoints can't be performed more frequently than checkpoints
+  in the master because restartpoints can only be performed at checkpoint records.
+ </para>
+
  <para>
   There are two commonly used internal <acronym>WAL</acronym> functions:
   <function>LogInsert</function> and <function>LogFlush</function>.
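
As a rough illustration of the space estimate documented above (not part of
the patch itself): the sketch below computes both documented bounds on the
number of segment files and takes the larger one. The parameter values and
the helper name are made up for the example; only the two formulas come from
the patched wal.sgml text.

# Back-of-the-envelope estimate of the normal upper bound on pg_xlog disk
# usage, per the two formulas in the patched wal.sgml paragraph. The
# parameter values below are illustrative, not taken from the patch.

SEGMENT_SIZE_MB = 16  # default WAL segment size (can differ if rebuilt)

def max_wal_segments(checkpoint_segments, checkpoint_completion_target,
                     wal_keep_segments):
    """Return the larger of the two documented bounds on segment count."""
    bound_normal = (2 + checkpoint_completion_target) * checkpoint_segments + 1
    bound_keep = checkpoint_segments + wal_keep_segments + 1
    return max(bound_normal, bound_keep)

# Example: checkpoint_segments = 3 (the default), completion target = 0.5,
# and wal_keep_segments = 32 to cover a standby that may fall behind.
segments = max_wal_segments(3, 0.5, 32)
print(f"~{segments} segments, ~{segments * SEGMENT_SIZE_MB:.0f} MB of WAL")
# -> ~36 segments, ~576 MB of WAL

With wal_keep_segments set, the second bound usually dominates, which is why
the patch adds it alongside the existing formula.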
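
The restartpoint trigger rules in the new wal.sgml paragraph can also be
read as two conditions gated by one precondition. The sketch below is a
simplified model of those rules only; all names are hypothetical, and this
is not PostgreSQL's actual recovery code.

# Simplified model of when a restartpoint is due during recovery, following
# the rules in the new wal.sgml paragraph. Hypothetical names throughout.

def restartpoint_due(standby_mode, checkpoint_records_replayed,
                     seconds_since_last_restartpoint, checkpoint_timeout,
                     segments_replayed_since_last, checkpoint_segments):
    # Restartpoints can only be performed at checkpoint records, so at
    # least one checkpoint record must have been replayed first.
    if checkpoint_records_replayed < 1:
        return False
    # Time-based trigger: applies in archive recovery and standby mode.
    if seconds_since_last_restartpoint >= checkpoint_timeout:
        return True
    # Volume-based trigger: per the patched text, standby mode only.
    if standby_mode and segments_replayed_since_last >= checkpoint_segments:
        return True
    return False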