postgres-lambda-diff · Commits · 621e14dc

Commit 621e14dc authored 17 years ago by Bruce Momjian

Add "High Availability, Load Balancing, and Replication Feature Matrix"
table to docs.

Parent: 5db1c58a
Changes: 1 changed file

doc/src/sgml/high-availability.sgml (+166, −41)
-<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.17 2007/11/04 19:23:24 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.18 2007/11/08 19:16:30 momjian Exp $ -->

 <chapter id="high-availability">
  <title>High Availability, Load Balancing, and Replication</title>
...
@@ -92,16 +92,23 @@
     </para>

     <para>
      Shared hardware functionality is common in network storage
      devices.  Using a network file system is also possible, though
      care must be
-     taken that the file system has full POSIX behavior.
+     taken that the file system has full POSIX behavior (see <xref
+     linkend="creating-cluster-nfs">).
      One significant limitation of this method is that if the shared
      disk array fails or becomes corrupt, the primary and standby
      servers are both nonfunctional.  Another issue is that the
      standby server should never access the shared storage while
      the primary server is running.
     </para>
    </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term>File System Replication</term>
+   <listitem>
     <para>
      A modified version of shared hardware functionality is file system
      replication, where all changes to a file system are mirrored to a file
...
@@ -125,7 +132,7 @@ protocol to make nodes agree on a serializable transactional order.
  </varlistentry>

  <varlistentry>
-  <term>Warm Standby Using Point-In-Time Recovery</term>
+  <term>Warm Standby Using Point-In-Time Recovery (<acronym>PITR</>)</term>
   <listitem>
    <para>
@@ -190,6 +197,21 @@ protocol to make nodes agree on a serializable transactional order.
...
@@ -190,6 +197,21 @@ protocol to make nodes agree on a serializable transactional order.
</listitem>
</listitem>
</varlistentry>
</varlistentry>
<varlistentry>
<term>Asynchronous Multi-Master Replication</term>
<listitem>
<para>
For servers that are not regularly connected, like laptops or
remote servers, keeping data consistent among servers is a
challenge. Using asynchronous multi-master replication, each
server works independently, and periodically communicates with
the other servers to identify conflicting transactions. The
conflicts can be resolved by users or conflict resolution rules.
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry>
<term>Synchronous Multi-Master Replication</term>
<term>Synchronous Multi-Master Replication</term>
<listitem>
<listitem>
...
@@ -222,21 +244,6 @@ protocol to make nodes agree on a serializable transactional order.
   </listitem>
  </varlistentry>

- <varlistentry>
-  <term>Asynchronous Multi-Master Replication</term>
-  <listitem>
-   <para>
-    For servers that are not regularly connected, like laptops or
-    remote servers, keeping data consistent among servers is a
-    challenge.  Using asynchronous multi-master replication, each
-    server works independently, and periodically communicates with
-    the other servers to identify conflicting transactions.  The
-    conflicts can be resolved by users or conflict resolution rules.
-   </para>
-  </listitem>
- </varlistentry>
-
  <varlistentry>
   <term>Data Partitioning</term>
   <listitem>
...
@@ -253,23 +260,6 @@ protocol to make nodes agree on a serializable transactional order.
   </listitem>
  </varlistentry>

- <varlistentry>
-  <term>Multi-Server Parallel Query Execution</term>
-  <listitem>
-   <para>
-    Many of the above solutions allow multiple servers to handle
-    multiple queries, but none allow a single query to use multiple
-    servers to complete faster.  This solution allows multiple
-    servers to work concurrently on a single query.  This is usually
-    accomplished by splitting the data among servers and having
-    each server execute its part of the query and return results
-    to a central server where they are combined and returned to
-    the user.  Pgpool-II has this capability.
-   </para>
-  </listitem>
- </varlistentry>
-
  <varlistentry>
   <term>Commercial Solutions</term>
   <listitem>
...
@@ -285,4 +275,139 @@ protocol to make nodes agree on a serializable transactional order.
 </variablelist>

+<para>
+ The table below (<xref linkend="high-availability-matrix">) summarizes
+ the capabilities of the various solutions listed above.
+</para>
+
+<table id="high-availability-matrix">
+ <title>High Availability, Load Balancing, and Replication Feature Matrix</title>
+ <tgroup cols="9">
+  <thead>
+   <row>
+    <entry>Feature</entry>
+    <entry>Shared Disk Failover</entry>
+    <entry>File System Replication</entry>
+    <entry>Warm Standby Using PITR</entry>
+    <entry>Master-Slave Replication</entry>
+    <entry>Statement-Based Replication Middleware</entry>
+    <entry>Asynchronous Multi-Master Replication</entry>
+    <entry>Synchronous Multi-Master Replication</entry>
+    <entry>Data Partitioning</entry>
+   </row>
+  </thead>
+  <tbody>
+   <row>
+    <entry>No special hardware required</entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+   </row>
+   <row>
+    <entry>Allows multiple master servers</entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+   </row>
+   <row>
+    <entry>No master server overhead</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+   </row>
+   <row>
+    <entry>Master server never locks others</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+   </row>
+   <row>
+    <entry>Master failure will never lose data</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+   </row>
+   <row>
+    <entry>Slaves accept read-only queries</entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+   </row>
+   <row>
+    <entry>Per-table granularity</entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+   </row>
+   <row>
+    <entry>No conflict resolution necessary</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center"></entry>
+    <entry align="center"></entry>
+    <entry align="center">&bull;</entry>
+    <entry align="center">&bull;</entry>
+   </row>
+  </tbody>
+ </tgroup>
+</table>
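The asynchronous multi-master entry above says that periodically connected servers identify conflicting transactions during synchronization and resolve them by users or rules. A minimal toy sketch of that detect-then-resolve idea, in Python with a last-writer-wins rule; all names (`Version`, `merge`) are invented for illustration and this is not how PostgreSQL or any real replication tool works:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Version:
    value: str
    timestamp: int   # logical clock on the originating server
    origin: str      # name of the server that produced this write


def merge(local, remote, resolve=None):
    """Merge two per-key version maps; return (merged, conflicts).

    A conflict is any key written independently on both servers since
    the last sync, detected here simply as two differing versions.
    """
    merged, conflicts = dict(local), []
    for key, theirs in remote.items():
        ours = merged.get(key)
        if ours is None or ours == theirs:
            merged[key] = theirs          # no independent local write
            continue
        conflicts.append((key, ours, theirs))
        # Default resolution rule: last writer wins, ties broken by origin.
        winner = max(ours, theirs, key=lambda v: (v.timestamp, v.origin))
        merged[key] = resolve(ours, theirs) if resolve else winner
    return merged, conflicts
```

Passing a custom `resolve` callable stands in for the "resolved by users" case: the conflict list can be queued for manual review instead of auto-merged.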
+<para>
+ Many of the above solutions allow multiple servers to handle multiple
+ queries, but none allow a single query to use multiple servers to
+ complete faster.  Multi-server parallel query execution allows multiple
+ servers to work concurrently on a single query.  This is usually
+ accomplished by splitting the data among servers and having each server
+ execute its part of the query and return results to a central server
+ where they are combined and returned to the user.  Pgpool-II has this
+ capability.
+</para>
+
 </chapter>
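The closing paragraph's description of parallel query execution (split the data among servers, run the per-server part everywhere, combine at a central server) is a scatter-gather pattern. A hedged sketch with a thread pool standing in for remote servers; `SHARDS`, `run_shard`, and `parallel_query` are invented names, and this is not Pgpool-II's actual mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

# Each "server" holds one horizontal partition of an accounts table.
SHARDS = [
    [("alice", 10), ("bob", 30)],
    [("carol", 25)],
    [("dave", 5), ("erin", 40)],
]


def run_shard(rows, predicate):
    """Execute the per-server part of the query against one partition."""
    return [row for row in rows if predicate(row)]


def parallel_query(predicate):
    """Scatter the query to every shard concurrently, then gather results."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(run_shard, SHARDS, [predicate] * len(SHARDS))
        # The "central server" combines the partial results for the user.
        combined = [row for part in partials for row in part]
    return sorted(combined)


rows = parallel_query(lambda row: row[1] >= 25)
```

The speedup comes from each shard scanning only its own fraction of the data; the central combine step is cheap as long as the predicate is selective.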