CVS User Account cvsuser
Thu Jun 15 12:23:42 PDT 2006
Log Message:
-----------
Updates to "best practices", added a considerable number of index
entries.

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.sgml (r1.12 -> r1.13)
        adminscripts.sgml (r1.34 -> r1.35)
        bestpractices.sgml (r1.16 -> r1.17)
        cluster.sgml (r1.11 -> r1.12)
        concepts.sgml (r1.18 -> r1.19)
        defineset.sgml (r1.23 -> r1.24)
        dropthings.sgml (r1.13 -> r1.14)
        failover.sgml (r1.20 -> r1.21)
        firstdb.sgml (r1.18 -> r1.19)
        help.sgml (r1.16 -> r1.17)
        installation.sgml (r1.23 -> r1.24)
        intro.sgml (r1.23 -> r1.24)
        maintenance.sgml (r1.18 -> r1.19)
        monitoring.sgml (r1.22 -> r1.23)
        plainpaths.sgml (r1.12 -> r1.13)
        reshape.sgml (r1.17 -> r1.18)
        slonik_ref.sgml (r1.48 -> r1.49)
        startslons.sgml (r1.15 -> r1.16)
        supportedplatforms.sgml (r1.5 -> r1.6)
        testbed.sgml (r1.7 -> r1.8)
        usingslonik.sgml (r1.15 -> r1.16)

-------------- next part --------------
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.23
retrieving revision 1.24
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.23 -r1.24
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -2,6 +2,8 @@
 <sect1 id="definingsets">
 <title>Defining &slony1; Replication Sets</title>
 
+<indexterm><primary>defining replication sets</primary></indexterm>
+
 <para>Defining the nodes indicated the shape of the cluster of
 database servers; it is now time to determine what data is to be
 copied between them.  The groups of data that are copied are defined
@@ -20,6 +22,8 @@
 
 <sect2><title>Primary Keys</title>
 
+<indexterm><primary>primary key requirement</primary></indexterm>
+
 <para>&slony1; <emphasis>needs</emphasis> to have a primary key or
 candidate thereof on each table that is replicated.  PK values are
 used as the primary identifier for each tuple that is modified in the
@@ -136,6 +140,8 @@
 
 <sect2> <title> The Pathology of Sequences </title>
 
+<indexterm><primary>sequence pathology</primary></indexterm>
+
 <para> Each time a SYNC is processed, values are recorded for
 <emphasis>all</emphasis> of the sequences in the set.  If there are a
 lot of sequences, this can cause <xref linkend="table.sl-seqlog"> to
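
A minimal slonik sketch of what this section describes; the cluster
name, conninfo, and ids are illustrative.  Every replicated table
needs a primary key or declared candidate key, and sequences must be
added to the set explicitly:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';

    create set (id = 1, origin = 1, comment = 'main set');
    set add table (set id = 1, origin = 1, id = 1,
        fully qualified name = 'public.accounts',
        comment = 'accounts table');
    set add sequence (set id = 1, origin = 1, id = 1,
        fully qualified name = 'public.accounts_id_seq',
        comment = 'accounts id sequence');

Since values for all of a set's sequences are recorded at every SYNC,
keeping the number of replicated sequences modest limits sl_seqlog
growth.
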
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.34
retrieving revision 1.35
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.34 -r1.35
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -2,6 +2,8 @@
 <sect1 id="altperl">
 <title>&slony1; Administration Scripts</title>
 
+<indexterm><primary>administration scripts for &slony1;</primary></indexterm>
+
 <para>In the <filename>altperl</filename> directory in the
 <application>CVS</application> tree, there is a sizable set of
 <application>Perl</application> scripts that may be used to administer
@@ -21,6 +23,7 @@
 linkend="slonik">.</para>
 
 <sect2><title>Node/Cluster Configuration - cluster.nodes</title>
+<indexterm><primary>cluster.nodes - node/cluster configuration for Perl tools</primary></indexterm>
 
 <para>The UNIX environment variable <envar>SLONYNODES</envar> is used
 to determine what Perl configuration file will be used to control the
@@ -73,6 +76,7 @@
 </programlisting>
 </sect2>
 <sect2><title>Set configuration - cluster.set1, cluster.set2</title>
+<indexterm><primary>cluster.set1 - replication set configuration for Perl tools</primary></indexterm>
 
 <para>The UNIX environment variable <envar>SLONYSET</envar> is used to
 determine what Perl configuration file will be used to determine what
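
A hedged sketch of the sort of Perl file SLONYNODES can point at; the
add_node() helper and variable names follow the altperl conventions,
and every host/database value here is invented:

    # All values invented for illustration.
    $CLUSTER_NAME = 'testcluster';
    $LOGDIR = '/var/log/slony1';
    $MASTERNODE = 1;

    add_node(node => 1, host => 'server1', dbname => 'mydb',
             port => 5432, user => 'slony');
    add_node(node => 2, host => 'server2', dbname => 'mydb',
             port => 5432, user => 'slony', parent => 1);

The tools then find it via the environment, e.g.
export SLONYNODES=/etc/slony/cluster.nodes.
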
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.48
retrieving revision 1.49
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.48 -r1.49
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -2418,13 +2418,13 @@
     that column later in the same request. </para>
 
     <para> In &slony1; version 1.2, the DDL script is split into
-    statements, and each is submitted separately.  As a result, it is
-    fine for later statements to refer to objects or attributes
-    created or modified in earlier statements.  Furthermore, in
-    version 1.2, the <command>slonik</command> output includes a
-    listing of each statement as it is processed, on the set origin
-    node.  Similarly, the statements processed are listed in slon logs
-    on the other nodes.</para>
+    statements, and each statement is submitted separately.  As a
+    result, it is fine for later statements to refer to objects or
+    attributes created or modified in earlier statements.
+    Furthermore, in version 1.2, the <command>slonik</command> output
+    includes a listing of each statement as it is processed, on the
+    set origin node.  Similarly, the statements processed are listed
+    in slon logs on the other nodes.</para>
    </refsect1>
   </refentry>
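
To make the 1.2 behaviour above concrete, a minimal execute script
sketch; the cluster name, conninfo, set id, and file path are
illustrative:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';

    # In 1.2 the file is split into individual statements, each
    # submitted separately and listed in the slonik/slon output.
    execute script (set id = 1,
                    filename = '/tmp/ddl_changes.sql',
                    event node = 1);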
 
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.16
retrieving revision 1.17
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.16 -r1.17
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -1,6 +1,7 @@
 <!-- $Id$ -->
 <sect1 id="help">
 <title>More &slony1; Help</title>
+<indexterm><primary>help - how to get more assistance</primary></indexterm>
 
 <para>If you are having problems with &slony1;, you have several
 options for help:
Index: cluster.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/cluster.sgml -Ldoc/adminguide/cluster.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/cluster.sgml
+++ doc/adminguide/cluster.sgml
@@ -1,9 +1,7 @@
 <!-- $Id$ -->
 <sect1 id="cluster">
 <title>Defining &slony1; Clusters</title>
-<indexterm>
- <primary>cluster</primary>
-</indexterm>
+<indexterm>  <primary>cluster definition</primary> </indexterm>
 
 <para>A &slony1; cluster is the basic grouping of database instances
 in which replication takes place.  It consists of a set of &postgres;
Index: bestpractices.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/bestpractices.sgml,v
retrieving revision 1.16
retrieving revision 1.17
diff -Ldoc/adminguide/bestpractices.sgml -Ldoc/adminguide/bestpractices.sgml -u -w -r1.16 -r1.17
--- doc/adminguide/bestpractices.sgml
+++ doc/adminguide/bestpractices.sgml
@@ -37,7 +37,11 @@
 between &slony1; versions, local libraries, and &postgres; libraries.
 Details count; you need to be clear on what hosts are running what
 versions of what software.
-</para>
+
+<para> This is normally a matter of being meticulous about checking
+what versions of software are in place everywhere, and is the natural
+result of having a distributed system composed of a large number of
+components that need to match. </para>
 </listitem>
 
 <listitem>
@@ -132,16 +136,17 @@
 running the &lslon; on the database server that it is
 servicing. </para>
 
-<para> In practice, having the &lslon; processes strewn
-across a dozen servers turns out to be really inconvenient to manage,
-as making changes to their configuration requires logging onto a whole
-bunch of servers.  In environments where it is necessary to use
+<para> In practice, having the &lslon; processes strewn across a dozen
+servers turns out to be really inconvenient to manage, as making
+changes to their configuration requires logging onto a whole bunch of
+servers.  In environments where it is necessary to use
 <application>sudo</application> for users to switch to application
 users, this turns out to be seriously inconvenient.  It turns out to
 be <emphasis>much</emphasis> easier to manage to group the <xref
 linkend="slon"> processes on one server per local network, so that
 <emphasis>one</emphasis> script can start, monitor, terminate, and
-otherwise maintain <emphasis>all</emphasis> of the nearby nodes.</para>
+otherwise maintain <emphasis>all</emphasis> of the nearby
+nodes.</para>
 
 <para> That also has the implication that configuration data and
 configuration scripts only need to be maintained in one place,
@@ -149,9 +154,9 @@
 </listitem>
 
 <listitem>
-<para>The <link linkend="ddlchanges"> Database Schema
-Changes </link> section outlines some practices that have been found
-useful for handling changes to database schemas. </para></listitem>
+<para>The <link linkend="ddlchanges"> Database Schema Changes </link>
+section outlines some practices that have been found useful for
+handling changes to database schemas. </para></listitem>
 
 <listitem>
 <para> Handling of Primary Keys </para> 
@@ -199,6 +204,90 @@
 outage. </para>
 </listitem>
 
+<listitem><para> What to do about DDL. </para>
+
+<para> &slony1; detects updates to table data via triggers attached
+to those tables.  That means that changes made via methods that do
+not fire triggers will go unnoticed by &slony1;.  <command>ALTER
+TABLE</command>, <command>CREATE OR REPLACE FUNCTION</command>, and
+<command>CREATE TABLE</command> all represent SQL requests that
+&slony1; has no way to notice. </para>
+
+<para> A philosophy underlying &slony1;'s handling of this is that
+competent system designers do not write self-modifying code, and
+database schemas modified on the fly by the application are an
+instance of this.  &slony1; therefore does not try hard to make it
+convenient to modify database schemas. </para>
+
+<para> There will be cases where that is necessary, so the <link
+linkend="stmtddlscript"> <command>execute script</command> </link>
+command is provided, which applies DDL changes at the same location
+in the transaction stream on all servers. </para>
+
+<para> Unfortunately, this introduces a great deal of locking of
+database objects.  Altering tables requires taking out an exclusive
+lock on them; doing so via <command>execute script</command> requires
+that &slony1; take out an exclusive lock on <emphasis>all</emphasis>
+replicated tables.  This can prove quite inconvenient when
+applications are running; you run into deadlocks and such. </para>
+
+<para> One particularly dogmatic position that some hold is that
+<emphasis>all</emphasis> schema changes should
+<emphasis>always</emphasis> be propagated using <command>execute
+script</command>.  This guarantees that nodes will be consistent, but
+the costs of locking and deadlocking may be too high for some
+users.</para>
+
+<para> At Afilias, our approach has been less dogmatic; there
+<emphasis>are</emphasis> some sorts of changes that
+<emphasis>must</emphasis> be applied using <command>execute
+script</command>, but we apply others independently.</para>
+
+<itemizedlist>
+<listitem><para> Changes that must be applied using <command>execute script</command> </para>
+<itemizedlist>
+<listitem><para> All instances of <command>ALTER TABLE</command></para></listitem>
+</itemizedlist>
+
+</listitem>
+<listitem><para> Changes that are not normally applied using <command>execute script</command> </para>
+<itemizedlist>
+<listitem><para> <command>CREATE INDEX</command> </para></listitem>
+<listitem><para> <command>CREATE TABLE</command> </para>
+<para> Tables that are not being replicated do not require &slony1; <quote>permission</quote>. </para></listitem>
+
+<listitem><para> <command>CREATE OR REPLACE FUNCTION</command> </para>
+
+<para> Typically, new versions of functions may be installed without
+&slony1; being <quote>aware</quote> of them.  The obvious exception is
+when a new function is being deployed to accommodate a table
+alteration; in that case, the new version must be added in a manner
+synchronized with the <command>execute script</command> for the table
+alteration. </para>
+
+<para> Similarly, <command>CREATE TYPE</command>, <command> CREATE
+AGGREGATE </command>, and such will commonly not need to be forcibly
+applied in a <quote>perfectly synchronized</quote> manner across
+nodes. </para></listitem>
+
+<listitem><para> Security management commands, such as <command>
+CREATE USER </command>, <command> CREATE ROLE </command>, and
+<command>GRANT </command>, are largely irrelevant to &slony1;, as it
+runs as a <quote>superuser</quote>. </para>
+
+<para> Indeed, we have frequently found it useful to have different
+security arrangements on different nodes.  Access to the
+<quote>master</quote> node should be restricted to applications that
+truly need access to it; <quote>reporting</quote> users commonly are
+restricted much more there than on subscriber nodes.</para>
+
+</listitem>
+</itemizedlist>
+</listitem>
+</itemizedlist>
+
+</listitem>
+
 <listitem id="slonyuser"> <para> &slony1;-specific user names. </para>
 
 <para> It has proven useful to define a <command>slony</command> user
@@ -247,6 +336,23 @@
 configuration. </para>
 </listitem>
 
+<listitem><para> Use <filename>test_slony_state.pl</filename> to look
+for configuration problems.</para>
+
+<para>This is a Perl script which connects to a &slony1; node and then
+rummages through &slony1; configuration looking for quite a variety of
+conditions that tend to indicate problems, including:
+<itemizedlist>
+<listitem><para>Bloating of some config tables</para></listitem>
+<listitem><para>Analysis of listen paths</para></listitem>
+<listitem><para>Analysis of event propagation and confirmation</para></listitem>
+</itemizedlist></para>
+
+<para> If replication mysteriously <quote>isn't working</quote>, this
+tool can run through many of the possible problems for you. </para>
+
+</listitem>
+
 <listitem>
 <para> Configuring &lslon; </para> 
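
A hedged invocation sketch for the test_slony_state tools mentioned
above; the option names are assumptions for illustration only, so
check the script's own usage output for the real ones:

    # Option names are assumptions; see the script's usage output.
    perl tools/test_slony_state-dbi.pl \
        --host=server1 --database=mydb \
        --user=slony --cluster=testcluster
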
 
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.18
retrieving revision 1.19
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.18 -r1.19
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -2,6 +2,8 @@
 <sect1 id="concepts">
 <title>&slony1; Concepts</title>
 
+<indexterm><primary>concepts and terminology</primary></indexterm>
+
 <para>In order to set up a set of &slony1; replicas, it is necessary
 to understand the following major abstractions that it uses:</para>
 
@@ -112,6 +114,8 @@
 
 <sect2><title>slon Daemon</title>
 
+<indexterm><primary>slon daemon</primary></indexterm>
+
 <para>For each node in the cluster, there will be a <xref
 linkend="slon"> process to manage replication activity for that node.
 </para>
@@ -138,6 +142,8 @@
 
 <sect2><title>slonik Configuration Processor</title>
 
+<indexterm><primary>slonik configuration processor</primary></indexterm>
+
 <para> The <xref linkend="slonik"> command processor processes scripts
 in a <quote>little language</quote> that are used to submit events to
 update the configuration of a &slony1; cluster.  This includes such
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.18
retrieving revision 1.19
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.18 -r1.19
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -1,6 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="maintenance"> <title>&slony1; Maintenance</title>
 
+<indexterm><primary>maintaining &slony1;</primary></indexterm>
+
 <para>&slony1; actually does a lot of its necessary maintenance
 itself, in a <quote>cleanup</quote> thread:
 
@@ -49,6 +51,8 @@
 
 <sect2><title> Watchdogs: Keeping Slons Running</title>
 
+<indexterm><primary>watchdogs to keep slon daemons running</primary></indexterm>
+
 <para>There are a couple of <quote>watchdog</quote> scripts available
 that monitor things, and restart the <application>slon</application>
 processes should they happen to die for some reason, such as a network
@@ -104,6 +108,8 @@
 
 <sect2><title>Testing &slony1; State </title>
 
+<indexterm><primary>testing cluster status</primary></indexterm>
+
 <para> In the <filename>tools</filename> directory, you may find
 scripts called <filename>test_slony_state.pl</filename> and
 <filename>test_slony_state-dbi.pl</filename>.  One uses the Perl/DBI
@@ -251,6 +257,8 @@
 
 <sect2><title> Log Files</title>
 
+<indexterm><primary>log files</primary></indexterm>
+
 <para><xref linkend="slon"> daemons generate some more-or-less verbose
 log files, depending on what debugging level is turned on.  You might
 assortedly wish to:
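
A shell sketch of the watchdog idea from the section above; this is
not one of the shipped scripts, and the cluster name, conninfo, and
log path are invented:

    #!/bin/sh
    # Restart slon whenever it exits.
    while true; do
        slon testcluster 'dbname=mydb host=server1 user=slony' \
            >> /var/log/slony1/node1.log 2>&1
        echo "slon exited; restarting in 10 seconds" >&2
        sleep 10
    done
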
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.18
retrieving revision 1.19
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.18 -r1.19
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -1,7 +1,7 @@
 <!-- $Id$ -->
 <sect1 id="firstdb"><title>Replicating Your First Database</title>
 
-<indexterm><primary>replicating a first database</primary></indexterm>
+<indexterm><primary>replicating your first database</primary></indexterm>
 
 <para>In this example, we will be replicating a brand new
 <application>pgbench</application> database.  The mechanics of
Index: usingslonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/usingslonik.sgml,v
retrieving revision 1.15
retrieving revision 1.16
diff -Ldoc/adminguide/usingslonik.sgml -Ldoc/adminguide/usingslonik.sgml -u -w -r1.15 -r1.16
--- doc/adminguide/usingslonik.sgml
+++ doc/adminguide/usingslonik.sgml
@@ -1,6 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="usingslonik"> <title>Using Slonik</title>
 
+<indexterm><primary>using slonik</primary></indexterm>
+
 <para> It's a bit of a pain writing <application>Slonik</application>
 scripts by hand, particularly as you start working with &slony1;
 clusters that may be comprised of increasing numbers of nodes and
Index: reshape.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v
retrieving revision 1.17
retrieving revision 1.18
diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.17 -r1.18
--- doc/adminguide/reshape.sgml
+++ doc/adminguide/reshape.sgml
@@ -1,6 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="reshape"> <title>Reshaping a Cluster</title>
 
+<indexterm><primary>reshaping replication</primary></indexterm>
+
 <para>If you rearrange the nodes so that they serve different
 purposes, this will likely lead to the subscribers changing a bit.</para>
 
Index: startslons.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/startslons.sgml,v
retrieving revision 1.15
retrieving revision 1.16
diff -Ldoc/adminguide/startslons.sgml -Ldoc/adminguide/startslons.sgml -u -w -r1.15 -r1.16
--- doc/adminguide/startslons.sgml
+++ doc/adminguide/startslons.sgml
@@ -1,6 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="slonstart"> <title>Slon daemons</title>
 
+<indexterm><primary>running slon</primary></indexterm>
+
 <para>The programs that actually perform &slony1; replication are the
 <application>slon</application> daemons.</para>
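
A minimal invocation sketch, one slon per node; the cluster name and
conninfo are invented, and -d sets the log verbosity:

    slon -d 2 testcluster 'dbname=mydb host=server1 user=slony'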
 
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -2,6 +2,9 @@
 <sect1 id="addthings">
 <title>Adding Things to Replication</title>
 
+<indexterm><primary>adding objects to replication</primary></indexterm>
+
+
 <para>You may discover that you have missed replicating things that
 you wish you were replicating.</para>
 
Index: intro.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v
retrieving revision 1.23
retrieving revision 1.24
diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.23 -r1.24
--- doc/adminguide/intro.sgml
+++ doc/adminguide/intro.sgml
@@ -1,6 +1,7 @@
 <!-- $Id$ -->
 <sect1 id="introduction">
 <title>Introduction to &slony1;</title>
+
 <sect2> <title>What &slony1; is</title>
 
 <para>&slony1; is a <quote>master to multiple slaves</quote>
@@ -133,6 +134,8 @@
 
 <sect2><title> Current Limitations</title>
 
+<indexterm><primary>limitations to &slony1;</primary></indexterm>
+
 <para>&slony1; does not automatically propagate schema changes, nor
 does it have any ability to replicate large objects.  There is a
 single common reason for these limitations, namely that &slony1;
@@ -190,6 +193,8 @@
 
 <sect2><title>Replication Models</title>
 
+<indexterm><primary>replication models</primary></indexterm>
+
 <para>There are a number of distinct models for database replication;
 it is impossible for one replication system to be all things to all
 people.</para>
Index: supportedplatforms.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/supportedplatforms.sgml,v
retrieving revision 1.5
retrieving revision 1.6
diff -Ldoc/adminguide/supportedplatforms.sgml -Ldoc/adminguide/supportedplatforms.sgml -u -w -r1.5 -r1.6
--- doc/adminguide/supportedplatforms.sgml
+++ doc/adminguide/supportedplatforms.sgml
@@ -1,6 +1,8 @@
 <article id="supportedplatforms">
 <title>&slony1; Supported Platforms</title>
 
+<indexterm><primary>platforms supported</primary></indexterm>
+
 <para>
 Slony-I has been verified to work on the platforms which are listed 
 below. Slony-I can be successfully built, installed and run on these 
Index: failover.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.20
retrieving revision 1.21
diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.20 -r1.21
--- doc/adminguide/failover.sgml
+++ doc/adminguide/failover.sgml
@@ -1,6 +1,10 @@
 <!-- $Id$ -->
 <sect1 id="failover">
 <title>Doing switchover and failover with &slony1;</title>
+<indexterm><primary>failover</primary>
+           <secondary>switchover</secondary>
+</indexterm>
+
 <sect2><title>Foreword</title>
 
 <para>&slony1; is an asynchronous replication system.  Because of
@@ -28,7 +32,7 @@
 <sect2><title> Controlled Switchover</title>
 
 <indexterm>
- <primary>Controlled switchover</primary>
+ <primary>controlled switchover</primary>
 </indexterm>
 
 <para> We assume a current <quote>origin</quote> as node1 with one
@@ -89,7 +93,7 @@
 <sect2><title> Failover</title>
 
 <indexterm>
- <primary>failover upon system failure</primary>
+ <primary>failover due to system failure</primary>
 </indexterm>
 
 <para> If some more serious problem occurs on the
@@ -178,6 +182,8 @@
 
 <sect2><title> Automating <command> FAIL OVER </command> </title>
 
+<indexterm><primary>automating failover</primary></indexterm>
+
 <para> If you do choose to automate <command>FAIL OVER </command>, it
 is important to do so <emphasis>carefully.</emphasis> You need to have
 good assurance that the failed node is well and truly failed, and you
@@ -212,7 +218,9 @@
 </sect2>
 
 <sect2 id="rebuildnode1"><title>After Failover, Reconfiguring
-node 1</title>
+Former Origin</title>
+
+<indexterm><primary>rebuilding after failover</primary></indexterm>
 
 <para> What happens to the failed node will depend somewhat on the
 nature of the catastrophe that led to needing to fail over to another
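
To contrast the two procedures, a slonik sketch with invented cluster
name, conninfo, and ids; controlled switchover loses no transactions,
while failover abandons whatever the dead origin had not yet
propagated:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
    node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';

    # Controlled switchover: both nodes healthy.
    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);

    # Failover: only when node 1 is well and truly dead.
    # failover (id = 1, backup node = 2);
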
Index: installation.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v
retrieving revision 1.23
retrieving revision 1.24
diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.23 -r1.24
--- doc/adminguide/installation.sgml
+++ doc/adminguide/installation.sgml
@@ -2,6 +2,8 @@
 <sect1 id="installation">
 <title>&slony1; Installation</title>
 
+<indexterm><primary>installation instructions</primary></indexterm>
+
 <note> <para>For &windows; users: Unless you are planning on hacking
 the &slony1; code, it is highly recommended that you download and
 install a prebuilt binary distribution and jump straight to the
@@ -57,6 +59,8 @@
 <sect2>
 <title>Configuration</title>
 
+<indexterm><primary>configuration instructions</primary></indexterm>
+
 <para> &slony1; normally needs to be built and installed by the
 &postgres; Unix user.  The installation target must be identical to
 the existing &postgres; installation particularly in view of the fact
@@ -74,8 +78,9 @@
 base certain parts needed for platform portability.  It now only needs
 to make reference to parts of &postgres; that are actually part of the
 installation.  Therefore, &slony1; is configured by pointing it to the
-various library, binary, and include directories.  For a full list of
-these options, use the command <command>./configure --help</command>
+various &postgres; library, binary, and include directories.  For a
+full list of these options, use the command <command>./configure
+--help</command>
 </para>
 
 <para>On certain platforms (AIX and Solaris are known to need this;
@@ -160,7 +165,7 @@
 </sect2>
 
 <sect2>
-<title> Installing &slony1;</title>
+<title> Installing &slony1; Once Built</title>
 
 <para> To install &slony1;, enter
 
@@ -277,6 +282,8 @@
 <sect2>
 <title> Installing the &slony1; service on &windows;</title>
 
+<indexterm><primary>installing &slony1; on &windows;</primary></indexterm>
+
 <para> On &windows; systems, instead of running one <xref
 linkend="slon"> daemon per node, a single slon service is installed
 which can then be controlled through the <command>Services</command>
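
A hedged shell sketch of the build sequence, run as the PostgreSQL
Unix user; the --with-pg* option names are assumptions, so confirm
them against ./configure --help:

    # Option names are assumptions; confirm with ./configure --help.
    ./configure --with-pgbindir=/usr/local/pgsql/bin \
                --with-pglibdir=/usr/local/pgsql/lib \
                --with-pgincludedir=/usr/local/pgsql/include
    make
    make install
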
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.22
retrieving revision 1.23
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.22 -r1.23
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -2,6 +2,8 @@
 <sect1 id="monitoring">
 <title>Monitoring</title>
 
+<indexterm><primary>monitoring &slony1;</primary></indexterm>
+
 <para>Here are some of the things that you may find in your &slony1; logs,
 and explanations of what they mean.</para>
 
@@ -152,7 +154,9 @@
 </para>
 </sect2>
 
-<sect2> <title> &nagios& Replication Checks </title>
+<sect2> <title> &nagios; Replication Checks </title>
+
+<indexterm><primary>&nagios; for monitoring replication</primary></indexterm>
 
 <para> The script in the <filename>tools</filename> directory called
 <command> pgsql_replication_check.pl </command> represents some of the
@@ -207,6 +211,8 @@
 
 <sect2 id="slonymrtg"> <title> Monitoring &slony1; using MRTG </title>
 
+<indexterm><primary>MRTG for monitoring replication</primary></indexterm>
+
 <para> One user reported on the &slony1; mailing list how to configure
 <ulink url="http://people.ee.ethz.ch/~oetiker/webtools/mrtg/">
 <application> mrtg - Multi Router Traffic Grapher </application>
Index: plainpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/plainpaths.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/plainpaths.sgml -Ldoc/adminguide/plainpaths.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/plainpaths.sgml
+++ doc/adminguide/plainpaths.sgml
@@ -1,6 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="plainpaths"><title> &slony1; Path Communications</title>
 
+<indexterm><primary>communication paths</primary></indexterm>
+
 <para> &slony1; uses &postgres; DSNs in three contexts to establish
 access to databases:
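
A slonik sketch of where such DSNs appear, with invented values: the
admin conninfo lines give slonik its access, and store path tells each
node's slon how to reach the others:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
    node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';

    # How node 2's slon reaches node 1, and the reverse.
    store path (server = 1, client = 2,
                conninfo = 'dbname=mydb host=server1 user=slony');
    store path (server = 2, client = 1,
                conninfo = 'dbname=mydb host=server2 user=slony');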
 
Index: dropthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v
retrieving revision 1.13
retrieving revision 1.14
diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.13 -r1.14
--- doc/adminguide/dropthings.sgml
+++ doc/adminguide/dropthings.sgml
@@ -1,11 +1,15 @@
 <!-- $Id$ -->
 <sect1 id="dropthings"> <title>Dropping things from &slony1; Replication</title>
 
+<indexterm><primary>dropping objects from replication</primary></indexterm>
+
 <para>There are several things you might want to do involving dropping
 things from &slony1; replication.</para>
 
 <sect2><title>Dropping A Whole Node</title>
 
+<indexterm><primary>dropping a node from replication</primary></indexterm>
+
 <para>If you wish to drop an entire node from replication, the <xref
 linkend="slonik"> command <xref linkend="stmtdropnode"> should do the
 trick.</para>
@@ -32,6 +36,8 @@
 
 <sect2><title>Dropping An Entire Set</title>
 
+<indexterm><primary>dropping a set from replication</primary></indexterm>
+
 <para>If you wish to stop replicating a particular replication set,
 the <xref linkend="slonik"> command <xref linkend="stmtdropset"> is
 what you need to use.</para>
@@ -54,6 +60,8 @@
 
 <sect2><title>Unsubscribing One Node From One Set</title>
 
+<indexterm><primary>unsubscribing a node from a set</primary></indexterm>
+
 <para>The <xref linkend="stmtunsubscribeset"> operation is a little
 less invasive than either <xref linkend="stmtdropset"> or <xref
 linkend="stmtdropnode">; it involves dropping &slony1; triggers and
@@ -73,7 +81,9 @@
 </para>
 
 </sect2>
-<sect2><title> Dropping A Table From A Set</title>
+<sect2><title> Dropping A Table From Replication</title>
+
+<indexterm><primary>dropping a table from replication</primary></indexterm>
 
 <para>In &slony1; 1.0.5 and above, there is a Slonik command <xref
 linkend="stmtsetdroptable"> that allows dropping a single table from
@@ -107,7 +117,9 @@
 to connect to each database and submit the queries by hand.</para>
 </sect2>
 
-<sect2><title>Dropping A Sequence From A Set</title>
+<sect2><title>Dropping A Sequence From Replication</title>
+
+<indexterm><primary>dropping a sequence from replication</primary></indexterm>
 
 <para>Just as with <xref linkend="stmtsetdroptable">, version 1.0.5
 introduces the operation <xref linkend="stmtsetdropsequence">.</para>
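
Gathering the operations from this file into one hedged slonik sketch;
the cluster name, conninfo, and ids are invented:

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';

    # Drop a single table (by its set-membership id) from replication.
    set drop table (origin = 1, id = 2);

    # Drop a single sequence similarly (1.0.5 and later).
    set drop sequence (origin = 1, id = 1);

    # Unsubscribe node 3 from set 1, or remove node 3 outright.
    # unsubscribe set (id = 1, receiver = 3);
    # drop node (id = 3, event node = 1);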
Index: testbed.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/testbed.sgml,v
retrieving revision 1.7
retrieving revision 1.8
diff -Ldoc/adminguide/testbed.sgml -Ldoc/adminguide/testbed.sgml -u -w -r1.7 -r1.8
--- doc/adminguide/testbed.sgml
+++ doc/adminguide/testbed.sgml
@@ -1,6 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="testbed"><title> &slony1; Test Bed Framework </title>
 
+<indexterm><primary>test bed framework</primary></indexterm>
+
 <para> As of version 1.1.5, &slony1; has a common test bed framework
 intended to better support performing a comprehensive set of tests.
 The code lives in the source tree under the <filename> tests


