CVS User Account cvsuser
Tue Dec 14 15:22:57 PST 2004
Log Message:
-----------
Enormous SGML cleanup/improvement

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.sgml (r1.2 -> r1.3)
        adminscripts.sgml (r1.3 -> r1.4)
        cluster.sgml (r1.3 -> r1.4)
        concepts.sgml (r1.3 -> r1.4)
        ddlchanges.sgml (r1.2 -> r1.3)
        defineset.sgml (r1.4 -> r1.5)
        dropthings.sgml (r1.3 -> r1.4)
        failover.sgml (r1.3 -> r1.4)
        faq.sgml (r1.2 -> r1.3)
        firstdb.sgml (r1.4 -> r1.5)
        help.sgml (r1.4 -> r1.5)
        installation.sgml (r1.3 -> r1.4)
        listenpaths.sgml (r1.3 -> r1.4)
        maintenance.sgml (r1.3 -> r1.4)
        monitoring.sgml (r1.3 -> r1.4)
        prerequisites.sgml (r1.2 -> r1.3)
        reshape.sgml (r1.3 -> r1.4)
        slon.sgml (r1.1 -> r1.2)
        slonik.sgml (r1.3 -> r1.4)
        startslons.sgml (r1.2 -> r1.3)
        subscribenodes.sgml (r1.3 -> r1.4)
        addthings.html (r1.2 -> r1.3)
        cluster.html (r1.2 -> r1.3)
        concepts.html (r1.2 -> r1.3)
        ddlchanges.html (r1.2 -> r1.3)
        dropthings.html (r1.2 -> r1.3)
        failover.html (r1.2 -> r1.3)
        faq.html (r1.2 -> r1.3)
        firstdb.html (r1.2 -> r1.3)
        help.html (r1.2 -> r1.3)
        installation.html (r1.2 -> r1.3)
        listenpaths.html (r1.2 -> r1.3)
        maintenance.html (r1.2 -> r1.3)
        monitoring.html (r1.2 -> r1.3)
        requirements.html (r1.2 -> r1.3)
        reshape.html (r1.2 -> r1.3)
        slonstart.html (r1.2 -> r1.3)
        slony.html (r1.2 -> r1.3)
        subscribenodes.html (r1.2 -> r1.3)

Added Files:
-----------
    slony1-engine/doc/adminguide:
        app-slon.html (r1.1)
        app-slonik.html (r1.1)
        definingsets.html (r1.1)
        slony-commands.html (r1.1)
        slonyadmin.html (r1.1)
        slonyintro.html (r1.1)

-------------- next part --------------
Index: reshape.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/reshape.html -Ldoc/adminguide/reshape.html -u -w -r1.2 -r1.3
--- doc/adminguide/reshape.html
+++ doc/adminguide/reshape.html
@@ -92,40 +92,81 @@
 ><LI
 ><P
 > If you want a node that is a subscriber to become the
-"master" provider for a particular replication set, you will have to
-issue the slonik MOVE SET operation to change that "master" provider
-node.&#13;</P
+origin for a particular replication set, you will have to issue a
+suitable <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> <TT
+CLASS="COMMAND"
+>MOVE SET</TT
+>
+operation.&#13;</P
 ></LI
 ><LI
 ><P
 > You may subsequently, or instead, wish to modify the
 subscriptions of other nodes.  You might want to modify a node to get
 its data from a different provider, or to change it to turn forwarding
-on or off.  This can be accomplished by issuing the slonik SUBSCRIBE
-SET operation with the new subscription information for the node;
-Slony-I will change the configuration.&#13;</P
+on or off.  This can be accomplished by issuing the slonik
+<TT
+CLASS="COMMAND"
+>SUBSCRIBE SET</TT
+> operation with the new subscription
+information for the node; <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> will change the
+configuration.&#13;</P
 ></LI
 ><LI
 ><P
 > If the directions of data flows have changed, it is
-doubtless appropriate to issue a set of DROP LISTEN operations to drop
-out obsolete paths between nodes and SET LISTEN to add the new ones.
-At present, this is not changed automatically; at some point, MOVE SET
-and SUBSCRIBE SET might change the paths as a side-effect.  See
-SlonyListenPaths for more information about this.  In version 1.1 and
-later, it is likely that the generation of sl_listen entries will be
-entirely automated, where they will be regenerated when changes are
-made to sl_path or sl_subscribe, thereby making it unnecessary to even
-think about SET LISTEN.&#13;</P
+doubtless appropriate to issue a set of <TT
+CLASS="COMMAND"
+>DROP LISTEN</TT
+>
+operations to drop out obsolete paths between nodes and <TT
+CLASS="COMMAND"
+>SET
+LISTEN</TT
+> to add the new ones.  At present, this is not changed
+automatically; at some point, <TT
+CLASS="COMMAND"
+>MOVE SET</TT
+> and
+<TT
+CLASS="COMMAND"
+>SUBSCRIBE SET</TT
+> might change the paths as a side-effect.  See
+<A
+HREF="listenpaths.html"
+> Slony Listen Paths </A
+> for more
+information about this.  In version 1.1 and later, it is likely that
+the generation of sl_listen entries will be entirely automated, where
+they will be regenerated when changes are made to sl_path or
+sl_subscribe, thereby making it unnecessary to even think about
+<TT
+CLASS="COMMAND"
+>SET LISTEN</TT
+>.&#13;</P
 ></LI
 ></UL
 >&#13;</P
 ><P
-> The "altperl" toolset includes a "init_cluster.pl" script that
-is quite up to the task of creating the new SET LISTEN commands; it
-isn't smart enough to know what listener paths should be dropped.
-
- </P
+> The <TT
+CLASS="FILENAME"
+>altperl</TT
+> toolset includes a
+<B
+CLASS="APPLICATION"
+>init_cluster.pl</B
+> script that is quite up to the task of
+creating the new <TT
+CLASS="COMMAND"
+>SET LISTEN</TT
+> commands; it isn't, however,
+smart enough to know what listener paths should be dropped.</P
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
Index: requirements.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/requirements.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/requirements.html -Ldoc/adminguide/requirements.html -u -w -r1.2 -r1.3
--- doc/adminguide/requirements.html
+++ doc/adminguide/requirements.html
@@ -82,7 +82,10 @@
 ></H1
 ><P
 >Any platform that can run PostgreSQL should be able to run
-Slony-I.&#13;</P
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>.&#13;</P
 ><P
 >The platforms that have received specific testing at the time of
 this release are FreeBSD-4X-i386, FreeBSD-5X-i386, FreeBSD-5X-alpha,
@@ -96,12 +99,16 @@
 >&#8482;-2.9-SPARC, AIX 5.1
 and OpenBSD-3.5-sparc64.&#13;</P
 ><P
->There have been reports of success at running Slony-I hosts that
-are running PostgreSQL on Microsoft <SPAN
+>There have been reports of success at running
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> hosts that are running PostgreSQL
+on Microsoft <SPAN
 CLASS="TRADEMARK"
 >Windows</SPAN
->&#8482;.  At this
-time, the <SPAN
+>&#8482;.  At this time, the
+<SPAN
 CLASS="QUOTE"
 >"binary"</SPAN
 > applications (<SPAN
@@ -113,17 +120,27 @@
 > -
 <B
 CLASS="APPLICATION"
->slonik</B
->, <B
+><A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+></B
+>,
+<B
 CLASS="APPLICATION"
->slon</B
->) do not run on
-<SPAN
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+>) do not
+run on <SPAN
 CLASS="TRADEMARK"
 >Windows</SPAN
 >&#8482;, but a <B
 CLASS="APPLICATION"
->slon</B
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
 > running on one of the
 Unix-like systems has no reason to have difficulty connecting to a
 PostgreSQL instance running on <SPAN
@@ -133,11 +150,16 @@
 ><P
 > It ought to be possible to port <B
 CLASS="APPLICATION"
->slon</B
->
-and <B
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+> and <B
 CLASS="APPLICATION"
->slonik</B
+><A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+></B
 > to run on
 <SPAN
 CLASS="TRADEMARK"
@@ -149,10 +171,13 @@
 > implementation for
 <B
 CLASS="APPLICATION"
->slon</B
->, as it uses that to have multiple
-threads of execution.  There are reports of there being a
-<TT
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+>, as it
+uses that to have multiple threads of execution.  There are reports of
+there being a <TT
 CLASS="FILENAME"
 >pthreads</TT
 > library for
@@ -166,7 +191,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN109"
+NAME="AEN117"
 >2.1. Software needed</A
 ></H2
 ><P
@@ -179,18 +204,18 @@
 make is often installed under the name <TT
 CLASS="COMMAND"
 >gmake</TT
->; this document
-will therefore always refer to it by that name. (On Linux-based
-systems GNU make is typically the default make, and is called
-<TT
+>; this
+document will therefore always refer to it by that name. (On
+Linux-based systems GNU make is typically the default make, and is
+called <TT
 CLASS="COMMAND"
 >make</TT
->) To test to see if your make is GNU make enter
-<TT
+>) To test to see if your make is GNU
+make enter <TT
 CLASS="COMMAND"
 >gmake --version</TT
->.  Version 3.76 or later will suffice; previous
-versions may not.&#13;</P
+>.  Version 3.76 or later
+will suffice; previous versions may not.&#13;</P
 ></LI
 ><LI
 ><P
@@ -209,22 +234,33 @@
 CLASS="EMPHASIS"
 >source</I
 ></SPAN
->.  Slony-I depends on namespace support so you must
-have version 7.3.3 or newer to be able to build and use Slony-I.  Rod
+>.  <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+depends on namespace support so you must have version 7.3.3 or newer
+to be able to build and use <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>.  Rod
 Taylor has <SPAN
 CLASS="QUOTE"
 >"hacked up"</SPAN
-> a version of Slony-I that works with
-version 7.2; if you desperately need that, look for him on the <A
-HREF="http://www.postgresql.org/lists.html"
-TARGET="_top"
-> PostgreSQL Hackers mailing
-list</A
->.  It is not anticipated that 7.2 will be supported by any
+> a version of
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> that works with version 7.2; if you
+desperately need that, look for him on the PostgreSQL Hackers mailing
+list.  It is not anticipated that 7.2 will be supported by any
 official <B
 CLASS="APPLICATION"
->Slony-I</B
-> release.&#13;</P
+><SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+></B
+>
+release.&#13;</P
 ></LI
 ><LI
 ><P
@@ -243,16 +279,11 @@
 ><LI
 ><P
 > If you need to obtain PostgreSQL source, you can
-download it from your favorite PostgreSQL mirror (see <A
+download it from your favorite PostgreSQL mirror.  See <A
 HREF="http://www.postgresql.org/mirrors-www.html"
 TARGET="_top"
 >http://www.postgresql.org/mirrors-www.html </A
-> for a list), or
-via <A
-HREF="http://bt.postgresql.org/"
-TARGET="_top"
-> BitTorrent</A
->.</P
+> for a list.</P
 ></LI
 ></UL
 >&#13;</P
@@ -266,11 +297,19 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN136"
->2.2. Getting Slony-I Source</A
+NAME="AEN146"
+>2.2. Getting <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+Source</A
 ></H2
 ><P
->You can get the Slony-I source from <A
+>You can get the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> source from
+<A
 HREF="http://developer.postgresql.org/~wieck/slony1/download/"
 TARGET="_top"
 >http://developer.postgresql.org/~wieck/slony1/download/</A
@@ -281,7 +320,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN140"
+NAME="AEN152"
 >2.3. Time Synchronization</A
 ></H2
 ><P
@@ -292,15 +331,19 @@
 all nodes, with subscriber nodes using the <SPAN
 CLASS="QUOTE"
 >"master"</SPAN
-> provider
-node as their time server.&#13;</P
+>
+provider node as their time server.&#13;</P
 ><P
-> It is possible for Slony-I to function even in the face of
-there being some time discrepancies, but having systems <SPAN
-CLASS="QUOTE"
->"in
-sync"</SPAN
-> is usually pretty important for distributed applications.&#13;</P
+> It is possible for <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> to
+function even in the face of there being some time discrepancies, but
+having systems <SPAN
+CLASS="QUOTE"
+>"in sync"</SPAN
+> is usually pretty important for
+distributed applications.&#13;</P
 ><P
 > See <A
 HREF="http://www.ntp.org/"
@@ -314,7 +357,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN148"
+NAME="AEN161"
 >2.4. Network Connectivity</A
 ></H2
 ><P
@@ -325,70 +368,89 @@
 CLASS="EMPHASIS"
 >bidirectional</I
 ></SPAN
-> network communications to the
-PostgreSQL instances.  That is, if node B is replicating data from
-node A, it is necessary that there be a path from A to B and from B to
-A.  It is recommended that all nodes in a Slony-I cluster allow this
-sort of bidirection communications from any node in the cluster to any
-other node in the cluster.&#13;</P
+> network communications
+to the PostgreSQL instances.  That is, if node B is replicating data
+from node A, it is necessary that there be a path from A to B and from
+B to A.  It is recommended that all nodes in a
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster allow this sort of
+bidirectional communications from any node in the cluster to any other
+node in the cluster.&#13;</P
 ><P
 >Note that the network addresses need to be consistent across all
-of the nodes.  Thus, if there is any need to use a <SPAN
+of the nodes.  Thus, if there is any need to use a
+<SPAN
 CLASS="QUOTE"
 >"public"</SPAN
->
-address for a node, to allow remote/VPN access, that <SPAN
+> address for a node, to allow remote/VPN access,
+that <SPAN
 CLASS="QUOTE"
 >"public"</SPAN
+> address needs to be able to be used
+consistently throughout the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
 >
-address needs to be able to be used consistently throughout the
-Slony-I cluster, as the address is propagated throughout the cluster
-in table <CODE
+cluster, as the address is propagated throughout the cluster in table
+<CODE
 CLASS="ENVAR"
 >sl_path</CODE
 >.&#13;</P
 ><P
 >A possible workaround for this, in environments where firewall
-rules are particularly difficult to implement, may be to establish
-SSH Tunnels that are created on each host that allow remote access
-through IP address 127.0.0.1, with a different port for each
-destination.&#13;</P
+rules are particularly difficult to implement, may be to establish SSH
+Tunnels that are created on each host that allow remote access through
+IP address 127.0.0.1, with a different port for each destination.&#13;</P
 ><P
 > Note that <B
 CLASS="APPLICATION"
 >slonik</B
-> and the <B
+> and the
+<B
 CLASS="APPLICATION"
 >slon</B
->
-instances need no special connections or protocols to communicate with
-one another; they just need to be able to get access to the
-<B
+> instances need no special connections
+or protocols to communicate with one another; they just need to be
+able to get access to the <B
 CLASS="APPLICATION"
 >PostgreSQL</B
-> databases, connecting as a <SPAN
+>
+databases, connecting as a <SPAN
 CLASS="QUOTE"
 >"superuser"</SPAN
 >.&#13;</P
 ><P
 > An implication of the communications model is that the entire
-extended network in which a Slony-I cluster operates must be able to
-be treated as being secure.  If there is a remote location where you
-cannot trust the Slony-I node to be considered <SPAN
+extended network in which a <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster
+operates must be able to be treated as being secure.  If there is a
+remote location where you cannot trust the
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> node to be considered
+<SPAN
 CLASS="QUOTE"
 >"secured,"</SPAN
-> this
-represents a vulnerability that adversely the security of the entire
-cluster.  In effect, the security policies throughout the cluster can
-only be considered as stringent as those applied at the
-<SPAN
+> this represents a vulnerability that adversely
+affects the security of the entire cluster.  In effect, the security policies
+throughout the cluster can only be considered as stringent as those
+applied at the <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >weakest</I
 ></SPAN
-> link.  Running a full-blown Slony-I node at a
-branch location that can't be kept secure compromises security for the
+> link.  Running a
+full-blown <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> node at a branch
+location that can't be kept secure compromises security for the
 cluster.&#13;</P
 ><P
 >Among the future plans is a feature whereby updates for a
@@ -396,23 +458,27 @@
 <SPAN
 CLASS="QUOTE"
 >"log shipping."</SPAN
-> The data stored in sl_log_1 and sl_log_2 would
-be written out to log files on disk.  These files could be transmitted
-in any manner desired, whether via scp, FTP, burning them onto
-DVD-ROMs and mailing them, or even by recording them on a USB
+> The data stored in sl_log_1 and sl_log_2
+would be written out to log files on disk.  These files could be
+transmitted in any manner desired, whether via scp, FTP, burning them
+onto DVD-ROMs and mailing them, or even by recording them on a USB
 <SPAN
 CLASS="QUOTE"
 >"flash device"</SPAN
-> and attaching them to birds, allowing a sort of
-<SPAN
+> and attaching them to birds, allowing a
+sort of <SPAN
 CLASS="QUOTE"
 >"avian transmission protocol."</SPAN
-> This will allow one way
-communications so that <SPAN
+> This will allow
+one way communications so that <SPAN
 CLASS="QUOTE"
 >"subscribers"</SPAN
-> that use log shipping would
-have no need for access to other Slony-I nodes.</P
+> that use log
+shipping would have no need for access to other
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> nodes.</P
 ></DIV
 ></DIV
 ><DIV
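
To make the SSH-tunnel workaround above concrete, a sketch (host names, ports,
and the account are hypothetical):

# On each host, forward a distinct local port to each remote node's
# PostgreSQL, so Slony-I can connect through 127.0.0.1 consistently.
ssh -N -L 15432:127.0.0.1:5432 slony@node2.example.com &

# The sl_path conninfo for node 2 would then read, e.g.:
#   'dbname=mydb host=127.0.0.1 port=15432 user=slony'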
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.4
retrieving revision 1.5
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.4 -r1.5
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -14,23 +14,23 @@
 
 <listitem><para> Tables that are to be replicated</para></listitem>
 
-<listitem><para> Sequences that are to be
-replicated</para></listitem>
+<listitem><para> Sequences that are to be replicated</para></listitem>
 </itemizedlist>
 
 <sect2><title> Primary Keys</title>
 
-<para>Slony-I <emphasis>needs</emphasis> to have a primary key on each
-table that is replicated.  PK values are used as the primary
-identifier for each tuple that is modified in the source system.
-There are three ways that you can get Slony-I to use a primary key:
+<para><productname/Slony-I/ <emphasis>needs</emphasis> to have a
+primary key or candidate thereof on each table that is replicated.  PK
+values are used as the primary identifier for each tuple that is
+modified in the source system.  There are three ways that you can get
+<productname/Slony-I/ to use a primary key:
 
 <itemizedlist>
 
 <listitem><para> If the table has a formally identified primary key,
-<command>SET ADD TABLE</command> can be used without any need to reference the
-primary key.  <application>Slony-I</application> will pick up that there is a
-primary key, and use it.
+<command>SET ADD TABLE</command> can be used without any need to
+reference the primary key.  <productname/Slony-I/ will pick up that
+there is a primary key, and use it.
 
 <listitem><para> If the table hasn't got a primary key, but has some
 <emphasis>candidate</emphasis> primary key, that is, some index on a
@@ -49,24 +49,24 @@
 key, as it infers the namespace from the table.
 
 <listitem><para> If the table hasn't even got a candidate primary key,
-you can ask Slony-I to provide one.  This is done by first using
-<command>TABLE ADD KEY</command> to add a column populated using a
-Slony-I sequence, and then having the <command>SET ADD TABLE</command>
-include the directive <option>key=serial</option>, to indicate that
-<application>Slony-I</application>'s own column should be
-used.</para></listitem>
+you can ask <productname/Slony-I/ to provide one.  This is done by
+first using <command>TABLE ADD KEY</command> to add a column populated
+using a <productname/Slony-I/ sequence, and then having the
+<command>SET ADD TABLE</command> include the directive
+<option>key=serial</option>, to indicate that <productname/Slony-I/'s
+own column should be used.</para></listitem>
 
 </itemizedlist>
 
 <para> It is not terribly important whether you pick a
 <quote>true</quote> primary key or a mere <quote>candidate primary
 key;</quote> it is, however, recommended that you have one of those
-instead of having Slony-I populate the PK column for you.  If you
-don't have a suitable primary key, that means that the table hasn't
-got any mechanism from your application's standpoint of keeping values
-unique.  Slony-I may therefore introduce a new failure mode for your
-application, and this implies that you had a way to enter confusing
-data into the database.</para>
+instead of having <productname/Slony-I/ populate the PK column for
+you.  If you don't have a suitable primary key, that means that the
+table hasn't got any mechanism from your application's standpoint of
+keeping values unique.  <productname/Slony-I/ may therefore introduce
+a new failure mode for your application, and this implies that you had
+a way to enter confusing data into the database.</para>
 
 <sect2><title> Grouping tables into sets</title>
 
@@ -75,15 +75,18 @@
 are thus related are <emphasis>not</emphasis> replicated together,
 you'll find yourself in trouble if you switch the <quote>master
 provider</quote> from one node to another, and discover that the new
-<quote>master</quote> can't be updated properly because it is missing the
-contents of dependent tables.</para>
+<quote>master</quote> can't be updated properly because it is missing
+the contents of dependent tables.</para>
 
 <para> If a database schema has been designed cleanly, it is likely
 that replication sets will be virtually synonymous with namespaces.
 All of the tables and sequences in a particular namespace will be
 sufficiently related that you will want to replicate them all.
-Conversely, tables found in different schemas will likely NOT be
-related, and therefore should be replicated in separate sets.</para>
+Conversely, tables found in different schemas will likely
+<emphasis/not/ be related, and therefore should be replicated in
+separate sets.</para>
+
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
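
A hedged slonik sketch of the three primary-key cases defineset.sgml describes
(set/table ids, table names, and the candidate index name are hypothetical):

#!/bin/sh
slonik <<_EOF_
cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';

# Case 1: table with a true primary key; no key clause needed.
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.orders');

# Case 2: only a candidate primary key; name the unique, not-null index.
set add table (set id = 1, origin = 1, id = 2,
               fully qualified name = 'public.order_lines',
               key = 'order_lines_uniq_idx');

# Case 3: no candidate key at all; have Slony-I add its own column.
table add key (node id = 1, fully qualified name = 'public.audit_log');
set add table (set id = 1, origin = 1, id = 3,
               fully qualified name = 'public.audit_log',
               key = serial);
_EOF_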
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -1,12 +1,27 @@
 <sect1 id="altperl"><title/ Slony-I Administration Scripts/
 
-<para>In the "altperl" directory in the CVS tree, there is a sizable set of Perl scripts that may be used to administer a set of Slony-I instances, which support having arbitrary numbers of nodes.
-
-<para>Most of them generate Slonik scripts that are then to be passed on to the slonik utility to be submitted to all of the Slony-I nodes in a particular cluster.  At one time, this embedded running slonik on the slonik scripts.  Unfortunately, this turned out to be a pretty large calibre "foot gun," as minor typos on the command line led, on a couple of occasions, to pretty calamitous actions, so the behaviour has been changed so that the scripts simply submit output to standard output.  An administrator should review the slonik script before submitting it to Slonik.
+<para>In the <filename/altperl/ directory in the <application/CVS/
+tree, there is a sizable set of <application/Perl/ scripts that may be
+used to administer a set of <productname/Slony-I/ instances, which
+support arbitrary numbers of nodes.
+
+<para>Most of them generate Slonik scripts that are then to be passed
+on to the <link linkend="slonik"> <application/slonik/ </link> utility
+to be submitted to all of the <productname/Slony-I/ nodes in a
+particular cluster.  At one time, these scripts themselves ran <link
+linkend="slonik"> slonik </link> on the generated slonik scripts.
+Unfortunately, this turned out to be a pretty large calibre
+<quote/foot gun,/ as minor typos on the command line led, on a couple
+of occasions, to pretty calamitous actions, so the behaviour has been
+changed so that the scripts simply submit output to standard output.
+An administrator should review the script <emphasis/before/ submitting
+it to <link linkend="slonik"> slonik </link>.
 
 <sect2><title> Node/Cluster Configuration - cluster.nodes</title>
 
-<para>The UNIX environment variable <envar/SLONYNODES/ is used to determine what Perl configuration file will be used to control the shape of the nodes in a Slony-I cluster.
+<para>The UNIX environment variable <envar/SLONYNODES/ is used to
+determine what Perl configuration file will be used to control the
+shape of the nodes in a <productname/Slony-I/ cluster.
 
 <para>What variables are set up...
 <itemizedlist>
@@ -17,7 +32,9 @@
 <listitem><Para> <envar/$APACHE_ROTATOR/="/opt/twcsds004/OXRS/apache/rotatelogs";  # If set, where to find Apache log rotator
 </itemizedlist>
 
-<para>You then define the set of nodes that are to be replicated using a set of calls to <function/add_node()/.
+<para>You then define the set of nodes that are to be replicated using
+a set of calls to <function/add_node()/.
+
 <para><command>
   add_node (host => '10.20.30.40', dbname => 'orglogs', port => 5437,
 			  user => 'postgres', node => 4, parent => 1);
@@ -39,17 +56,41 @@
      
 <sect2><title> Set configuration - cluster.set1, cluster.set2</title>
 
-<para>The UNIX environment variable <envar/SLONYSET/ is used to determine what Perl configuration file will be used to determine what objects will be contained in a particular replication set.
-
-<para>Unlike <envar/SLONYNODES/, which is essential for <emphasis/all/ of the slonik-generating scripts, this only needs to be set when running <filename/create_set.pl/, as that is the only script used to control what tables will be in a particular replication set.
+<para>The UNIX environment variable <envar/SLONYSET/ is used to
+determine what Perl configuration file will be used to determine what
+objects will be contained in a particular replication set.
+
+<para>Unlike <envar>SLONYNODES</envar>, which is essential for
+<emphasis>all</emphasis> of the <link linkend="slonik"> slonik
+</link>-generating scripts, this only needs to be set when running
+<filename>create_set.pl</filename>, as that is the only script used to
+control what tables will be in a particular replication set.</para>
 
 <para>What variables are set up...
 <itemizedlist>
-<listitem><Para> $TABLE_ID = 44;	 Each table must be identified by a unique number; this variable controls where numbering starts
-<listitem><Para> @PKEYEDTABLES		An array of names of tables to be replicated that have a defined primary key so that Slony-I can automatically select its key
-<listitem><Para> %KEYEDTABLES		 A hash table of tables to be replicated, where the hash index is the table name, and the hash value is the name of a unique not null index suitable as a "candidate primary key."
-<listitem><Para> @SERIALTABLES		An array of names of tables to be replicated that have no candidate for primary key.  Slony-I will add a key field based on a sequence that Slony-I generates
-<listitem><Para> @SEQUENCES			An array of names of sequences that are to be replicated
+<listitem><Para> $TABLE_ID = 44;	 
+<para> Each table must be identified by a unique number; this variable controls where numbering starts
+<listitem><Para> @PKEYEDTABLES		
+
+<para> An array of names of tables to be replicated that have a
+defined primary key so that Slony-I can automatically select its key
+
+<listitem><Para> %KEYEDTABLES
+
+<para> A hash table of tables to be replicated, where the hash index
+is the table name, and the hash value is the name of a unique not null
+index suitable as a "candidate primary key."
+
+<listitem><Para> @SERIALTABLES
+
+<para> An array of names of tables to be replicated that have no
+candidate for primary key.  Slony-I will add a key field based on a
+sequence that Slony-I generates
+
+<listitem><Para> @SEQUENCES
+
+<para> An array of names of sequences that are to be replicated
+
 </itemizedlist>
 
 <sect2><title/ build_env.pl/
@@ -64,9 +105,10 @@
 
 <sect2><title/ create_set.pl/
 
-<para>This requires <envar/SLONYSET/ to be set as well as <envar/SLONYNODES/; it is used to
-generate the Slonik script to set up a replication set consisting of a
-set of tables and sequences that are to be replicated.
+<para>This requires <envar/SLONYSET/ to be set as well as
+<envar/SLONYNODES/; it is used to generate the Slonik script to set up
+a replication set consisting of a set of tables and sequences that are
+to be replicated.
 
 <sect2><title/ drop_node.pl/
 
@@ -162,6 +204,8 @@
 functions.  This will typically be needed when you upgrade from one
 version of Slony-I to another.
 
+</sect1>
+
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
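
A sketch of the generate-review-submit workflow described above (the paths are
hypothetical, and create_set.pl's exact arguments may vary between versions):

#!/bin/sh
export SLONYNODES=/etc/slony/cluster.nodes   # node definitions (add_node calls)
export SLONYSET=/etc/slony/cluster.set1      # set contents (tables/sequences)

# The scripts emit slonik commands on standard output; capture them,
perl create_set.pl set1 > create_set1.slonik

# review the generated script before doing anything destructive,
less create_set1.slonik

# and only then feed it to slonik.
slonik create_set1.slonik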
--- /dev/null
+++ doc/adminguide/slonyintro.html
@@ -0,0 +1,445 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slony-I Introduction</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="PREVIOUS"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="NEXT"
+TITLE=" Requirements"
+HREF="requirements.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="ARTICLE"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="slony.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="requirements.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="ARTICLE"
+><DIV
+CLASS="TITLEPAGE"
+><H1
+CLASS="TITLE"
+><A
+NAME="SLONYINTRO"
+>Slony-I Introduction</A
+></H1
+><HR></DIV
+><DIV
+CLASS="TOC"
+><DL
+><DT
+><B
+>Table of Contents</B
+></DT
+><DT
+>1. <A
+HREF="slonyintro.html#INTRODUCTION"
+>Introduction to Slony-I</A
+></DT
+><DT
+>2. <A
+HREF="requirements.html"
+>Requirements</A
+></DT
+><DT
+>3. <A
+HREF="installation.html"
+>Slony-I Installation</A
+></DT
+><DT
+>4. <A
+HREF="concepts.html"
+>Slony-I Concepts</A
+></DT
+><DT
+>5. <A
+HREF="cluster.html"
+>Defining Slony-I Clusters</A
+></DT
+><DT
+>6. <A
+HREF="definingsets.html"
+>Defining Slony-I Replication
+Sets</A
+></DT
+></DL
+></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="INTRODUCTION"
+>1. Introduction to Slony-I</A
+></H1
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN28"
+>1.1. Why yet another replication system?</A
+></H2
+><P
+>Slony-I was born from an idea to create a replication system that was not tied
+to a specific version of PostgreSQL, and that could be started and stopped on
+an existing database without the need for a dump/reload cycle.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN31"
+>1.2. What Slony-I is</A
+></H2
+><P
+>Slony-I is a <SPAN
+CLASS="QUOTE"
+>"master to multiple slaves"</SPAN
+> replication
+system supporting cascading and slave promotion.  The big picture for
+the development of Slony-I is as a master-slave system that includes
+all features and capabilities needed to replicate large databases to a
+reasonably limited number of slave systems.  <SPAN
+CLASS="QUOTE"
+>"Reasonable,"</SPAN
+> in this
+context, is probably no more than a few dozen servers.  If the number
+of servers grows beyond that, the cost of communications becomes
+prohibitively high.</P
+><P
+> See also <A
+HREF="slonyintro.html#SLONYLISTENERCOSTS"
+> SlonyListenerCosts</A
+> for a further analysis.</P
+><P
+> Slony-I is a system intended for data centers and backup sites,
+where the normal mode of operation is that all nodes are available all
+the time, and where all nodes can be secured.  If you have nodes that
+are likely to regularly drop onto and off of the network, or have
+nodes that cannot be kept secure, Slony-I may not be the ideal
+replication solution for you.</P
+><P
+> There are plans for a <SPAN
+CLASS="QUOTE"
+>"file-based log shipping"</SPAN
+>
+extension where updates would be serialized into files.  Given that,
+log files could be distributed by any means desired without any need
+of feedback between the provider node and those nodes subscribing via
+<SPAN
+CLASS="QUOTE"
+>"log shipping."</SPAN
+></P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN42"
+>1.3. Slony-I is not</A
+></H2
+><P
+>Slony-I is not a network management system.</P
+><P
+> Slony-I does not have any functionality within it to detect a
+node failure, or automatically promote a node to a master or other
+data origin.</P
+><P
+>Slony-I is not multi-master; it's not a connection broker, and
+it doesn't make you coffee and toast in the morning.</P
+><P
+>(That being said, the plan is for a subsequent system, Slony-II,
+to provide "multimaster" capabilities, and be "bootstrapped" using
+Slony-I.  But that is a separate project, and expectations for Slony-I
+should not be based on hopes for future projects.)</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN48"
+>1.4. Why doesn't Slony-I do automatic fail-over/promotion?</A
+></H2
+><P
+>This is the job of network monitoring software, not Slony.
+Every site's configuration and fail-over path is different.  For
+example, keep-alive monitoring with redundant NICs and intelligent HA
+switches that guarantee race-condition-free takeover of a network
+address and disconnection of the <SPAN
+CLASS="QUOTE"
+>"failed"</SPAN
+> node vary with every
+network setup, vendor choice, and hardware/software combination.  This is
+clearly the realm of network management software and not
+Slony-I.</P
+><P
+>Let Slony-I do what it does best: provide database replication.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN53"
+>1.5. Current Limitations</A
+></H2
+><P
+>Slony-I does not automatically propagate schema changes, nor
+does it have any ability to replicate large objects.  There is a
+single common reason for these limitations, namely that Slony-I
+operates using triggers, and neither schema changes nor large object
+operations can raise triggers suitable to tell Slony-I when those
+kinds of changes take place.</P
+><P
+>There is a capability for Slony-I to propagate DDL changes if
+you submit them as scripts via the <B
+CLASS="APPLICATION"
+>slonik</B
+>
+<TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+> operation.  That is not
+<SPAN
+CLASS="QUOTE"
+>"automatic;"</SPAN
+> you have to construct an SQL DDL script and submit
+it.</P
+><P
+>If you have those sorts of requirements, it may be worth
+exploring the use of <B
+CLASS="APPLICATION"
+>PostgreSQL</B
+> 8.0 PITR (Point In Time
+Recovery), where <ACRONYM
+CLASS="ACRONYM"
+>WAL</ACRONYM
+> logs are replicated to remote
+nodes.  Unfortunately, that has two attendant limitations:
+
+<P
+></P
+><UL
+><LI
+><P
+> PITR replicates <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> changes in
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> databases; you cannot exclude data that isn't
+relevant;</P
+></LI
+><LI
+><P
+> A PITR replica remains dormant until you apply logs
+and start up the database.  You cannot use the database and apply
+updates simultaneously.  It is like having a <SPAN
+CLASS="QUOTE"
+>"standby
+server"</SPAN
+> which cannot be used without it ceasing to be
+<SPAN
+CLASS="QUOTE"
+>"standby."</SPAN
+></P
+></LI
+></UL
+></P
+><P
+>There are a number of distinct models for database replication;
+it is impossible for one replication system to be all things to all
+people.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="SLONYLISTENERCOSTS"
+>1.6. Slony-I Communications
+Costs</A
+></H2
+><P
+>The cost of communications grows in a quadratic fashion in
+several directions as the number of replication nodes in a cluster
+increases.  Note the following relationships:
+
+<P
+></P
+><UL
+><LI
+><P
+> It is necessary to have a sl_path entry allowing
+connection from each node to every other node.  Most will normally not
+need to be used for a given replication configuration, but this means
+that there need to be n(n-1) paths.  It is probable that there will
+be considerable repetition of entries, since the path to "node n" is
+likely to be the same from everywhere in the network.</P
+></LI
+><LI
+><P
+> It is similarly necessary to have a sl_listen entry
+indicating how data flows from every node to every other node.  This
+again requires configuring n(n-1) "listener paths."</P
+></LI
+><LI
+><P
+> Each SYNC applied needs to be reported back to all of
+the other nodes participating in the set so that the nodes all know
+that it is safe to purge sl_log_1 and sl_log_2 data, as any
+<SPAN
+CLASS="QUOTE"
+>"forwarding"</SPAN
+> node could potentially take over as <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+>
+at any time.  One might expect SYNC messages to need to travel through
+n/2 nodes to get propagated to their destinations; this means that
+each SYNC is expected to get transmitted n(n/2) times.  Again, this
+points to a quadratic growth in communications costs as the number of
+nodes increases.</P
+></LI
+></UL
+></P
+><P
+>This points to it being a bad idea to let the number of nodes, and
+hence the size of the communications network, grow large.
+Up to a half dozen nodes seems pretty reasonable; every time the
+number of nodes doubles, this can be expected to quadruple
+communications overheads.</P
+></DIV
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="requirements.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I 1.1 Administration</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+>&nbsp;</TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Requirements</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
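
A quick shell check of the quadratic growth described in section 1.6, using the
n(n-1) path count and the n·(n/2) SYNC-propagation estimate from the text:

#!/bin/sh
for n in 2 4 6 12 24; do
    echo "n=$n paths=$((n * (n - 1))) sync_msgs_per_sync=$((n * n / 2))"
done
# n=6 already needs 30 paths and 30 listener entries; doubling the node
# count roughly quadruples both, matching the conclusion above.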
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -1,84 +1,90 @@
-<sect1 id="maintenance"> <title/Slony-I Maintenance/
+<sect1 id="maintenance"> <title>Slony-I Maintenance</title>
 
-<para>Slony-I actually does most of its necessary maintenance itself, in a "cleanup" thread:
+<para><productname/Slony-I/ actually does most of its necessary
+maintenance itself, in a <quote>cleanup</quote> thread:
 
 <itemizedlist>
 
-<Listitem><para> Deletes old data from various tables in the Slony-I
-cluster's namespace, notably entries in sl_log_1, sl_log_2 (not yet
-used), and sl_seqlog.
-
-<listitem><Para> Vacuum certain tables used by Slony-I.  As of 1.0.5,
-this includes pg_listener; in earlier versions, you must vacuum that
-table heavily, otherwise you'll find replication slowing down because
-Slony-I raises plenty of events, which leads to that table having
-plenty of dead tuples.
+<listitem><para> Deletes old data from various tables in the
+<productname/Slony-I/ cluster's namespace, notably entries in
+sl_log_1, sl_log_2 (not yet used), and sl_seqlog.
+
+<listitem><para> Vacuum certain tables used by <productname/Slony-I/.
+As of 1.0.5, this includes pg_listener; in earlier versions, you must
+vacuum that table heavily, otherwise you'll find replication slowing
+down because <productname/Slony-I/ raises plenty of events, which
+leads to that table having plenty of dead tuples.
 
 <para> In some versions (1.1, for sure; possibly 1.0.5) there is the
 option of not bothering to vacuum any of these tables if you are using
-something like pg_autovacuum to handle vacuuming of these tables.
-Unfortunately, it has been quite possible for pg_autovacuum to not
-vacuum quite frequently enough, so you probably want to use the
-internal vacuums.  Vacuuming pg_listener "too often" isn't nearly as
-hazardous as not vacuuming it frequently enough.
+something like <application/pg_autovacuum/ to handle vacuuming of
+these tables.  Unfortunately, it has been quite possible for
+<application/pg_autovacuum/ to not vacuum quite frequently enough, so
+you probably want to use the internal vacuums.  Vacuuming pg_listener
+"too often" isn't nearly as hazardous as not vacuuming it frequently
+enough.
 
 <para>Unfortunately, if you have long-running transactions, vacuums
 cannot clear out dead tuples that are newer than the eldest
 transaction that is still running.  This will most notably lead to
-pg_listener growing large and will slow replication.
+pg_listener growing large and will slow replication.</para>
 
 </itemizedlist>
 
-<sect2><title/ Watchdogs: Keeping Slons Running/
+<sect2><title> Watchdogs: Keeping Slons Running</title>
 
-<para>There are a couple of "watchdog" scripts available that monitor
-things, and restart the slon processes should they happen to die for
-some reason, such as a network "glitch" that causes loss of
-connectivity.
+<para>There are a couple of <quote/watchdog/ scripts available that
+monitor things, and restart the <application/slon/ processes should
+they happen to die for some reason, such as a network <quote/glitch/
+that causes loss of connectivity.
 
 <para>You might want to run them...
 
-<sect2><title/Alternative to Watchdog: generate_syncs.sh/
+<sect2><title>Alternative to Watchdog: generate_syncs.sh</title>
 
-<para>A new script for Slony-I 1.1 is "generate_syncs.sh", which
-addresses the following kind of situation.
-
-<para>Supposing you have some possibly-flakey slon daemon that might
-not run all the time, you might return from a weekend away only to
-discover the following situation...
-
-<para>On Friday night, something went "bump" and while the database
-came back up, none of the slon daemons survived.  Your online
-application then saw nearly three days worth of heavy transactions.
+<para>A new script for <productname/Slony-I/ 1.1 is
+<application/generate_syncs.sh/, which addresses the following kind of
+situation.
+
+<para>Supposing you have some possibly-flakey server where the <application/slon/
+daemon might not run all the time, you might return from a
+weekend away only to discover the following situation...
+
+<para>On Friday night, something went <quote/bump/ and while the
+database came back up, none of the <application/slon/ daemons
+survived.  Your online application then saw nearly three days worth of
+reasonably heavy transaction load.
 
 <para>When you restart slon on Monday, it hasn't done a SYNC on the
-master since Friday, so that the next "SYNC set" comprises all of the
-updates between Friday and Monday.  Yuck.
+master since Friday, so that the next <quote/SYNC set/ comprises all
+of the updates between Friday and Monday.  Yuck.
 
-<para>If you run generate_syncs.sh as a cron job every 20 minutes, it
-will force in a periodic SYNC on the "master" server, which means that
+<para>If you run <application/generate_syncs.sh/ as a cron job every 20 minutes, it
+will force in a periodic SYNC on the origin, which means that
 between Friday and Monday, the numerous updates are split into more
 than 100 syncs, which can be applied incrementally, making the cleanup
 a lot less unpleasant.
 
-<para>Note that if SYNCs <emphasis/are/ running regularly, this script
-won't bother doing anything.
+<para>Note that if SYNCs <emphasis>are</emphasis> running regularly,
+this script won't bother doing anything.
 
-<sect2><title/ Log Files/
+<sect2><title> Log Files</title>
 
-<para>Slon daemons generate some more-or-less verbose log files,
-depending on what debugging level is turned on.  You might assortedly
-wish to:
+<para><link linkend="slon"> <application/slon/ </link> daemons
+generate some more-or-less verbose log files, depending on what
+debugging level is turned on.  You might assortedly wish to:
 
 <itemizedlist>
 
-<listitem><Para> Use a log rotator like Apache rotatelogs to have a
-sequence of log files so that no one of them gets too big;
+<listitem><para> Use a log rotator like <productname/Apache/
+<application/rotatelogs/ to have a sequence of log files so that no
+one of them gets too big;
 
-<listitem><Para> Purge out old log files, periodically.
+<listitem><para> Purge out old log files, periodically.</para>
 
 </itemizedlist>
-
+</sect2>
+</sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
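
Two hedged illustrations of the maintenance habits above (the paths, cluster
name, and conninfo are hypothetical):

# crontab entry: force a periodic SYNC on the origin every 20 minutes.
*/20 * * * * /usr/local/slony/bin/generate_syncs.sh

# Start slon with its output fed through Apache's rotatelogs so that no
# single log file grows without bound (new file every 86400 seconds):
slon testcluster "dbname=mydb host=node1 user=slony" 2>&1 \
    | /opt/apache/bin/rotatelogs /var/log/slony/slon-node1.log 86400 &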
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.4
retrieving revision 1.5
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.4 -r1.5
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -1,13 +1,14 @@
 <sect1 id="firstdb"><title/Replicating Your First Database/
 
-<para>In this example, we will be replicating a brand new pgbench database.  The
-mechanics of replicating an existing database are covered here, however we
-recommend that you learn how Slony-I functions by using a fresh new
-non-production database.
-
-<para>The Slony-I replication engine is trigger-based, allowing us to
-replicate databases (or portions thereof) running under the same
-postmaster.
+<para>In this example, we will be replicating a brand new pgbench
+database.  The mechanics of replicating an existing database are
+covered here, however we recommend that you learn how
+<productname>Slony-I</productname> functions by using a fresh new
+non-production database.</para>
+
+<para>The <productname>Slony-I</productname> replication engine is
+trigger-based, allowing us to replicate databases (or portions
+thereof) running under the same postmaster.</para>
 
 <para>This example will show how to replicate the pgbench database
 running on localhost (master) to the pgbench slave database also
@@ -15,12 +16,17 @@
 your PostgreSQL configuration:
 
 <itemizedlist>
-	<listitem><para> You have <option/tcpip_socket=true/ in your <filename/postgresql.conf/ and
-	<listitem><para> You have enabled access in your cluster(s) via <filename/pg_hba.conf/
-</itemizedlist>
 
-<para> The <envar/REPLICATIONUSER/ needs to be a PostgreSQL superuser.  This is typically
-postgres or pgsql.
+<listitem><para> You have <option>tcpip_socket=true</option> in your
+<filename>postgresql.conf</filename> and</para></listitem>
+
+<listitem><para> You have enabled access in your cluster(s) via
+<filename>pg_hba.conf</filename></para></listitem>
+
+</itemizedlist></para>
+
+<para> The <envar>REPLICATIONUSER</envar> needs to be a PostgreSQL superuser.
+This is typically postgres or pgsql.</para>
 
 <para>You should also set the following shell variables:
 
@@ -66,7 +72,7 @@
 pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
 </programlisting>
 
-<para>Because Slony-I depends on the databases having the pl/pgSQL procedural
+<para>Because <productname/Slony-I/ depends on the databases having the pl/pgSQL procedural
 language installed, we better install it now.  It is possible that you have
 installed pl/pgSQL into the template1 database in which case you can skip this
 step because it's already installed into the $MASTERDBNAME.
@@ -75,7 +81,7 @@
 createlang plpgsql -h $MASTERHOST $MASTERDBNAME
 </programlisting>
 
-<para>Slony-I does not yet automatically copy table definitions from a
+<para><productname/Slony-I/ does not yet automatically copy table definitions from a
 master when a slave subscribes to it, so we need to import this data.
 We do this with <application/pg_dump/.
 
@@ -83,12 +89,12 @@
 pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME
 </programlisting>
 
-<para>To illustrate how Slony-I allows for on the fly replication
-subscription, let's start up <application/pgbench/.  If you run the
-<application/pgbench/ application in the foreground of a separate
-terminal window, you can stop and restart it with different parameters
-at any time.  You'll need to re-export the variables again so they are
-available in this session as well.
+<para>To illustrate how <productname/Slony-I/ allows for on the fly
+replication subscription, let's start up <application/pgbench/.  If
+you run the <application/pgbench/ application in the foreground of a
+separate terminal window, you can stop and restart it with different
+parameters at any time.  You'll need to re-export the variables again
+so they are available in this session as well.
 
 <para>The typical command to run <application/pgbench/ would look like:
 
@@ -97,17 +103,17 @@
 </programlisting>
 
 <para>This will run <application/pgbench/ with 5 concurrent clients
-each processing 1000 transactions against the pgbench database running
-on localhost as the pgbench user.
+each processing 1000 transactions against the <application/pgbench/
+database running on localhost as the pgbench user.
 
 <sect2><title/ Configuring the Database for Replication./
 
 <para>Creating the configuration tables, stored procedures, triggers
-and configuration is all done through the slonik tool.  It is a
-specialized scripting aid that mostly calls stored procedures in the
-master/slave (node) databases.  The script to create the initial
-configuration for the simple master-slave setup of our pgbench
-database looks like this:
+and configuration is all done through the <link linkend="slonik">
+<application/slonik/ </link> tool.  It is a specialized scripting aid
+that mostly calls stored procedures in the master/slave (node)
+databases.  The script to create the initial configuration for the
+simple master-slave setup of our pgbench database looks like this:
 
 <programlisting>
 #!/bin/sh
@@ -170,11 +176,13 @@
 _EOF_
 </programlisting>
 
-<para>Is the pgbench still running?  If not start it again.
+<para>Is the <application/pgbench/ still running?  If not, start it
+again.
 
 <para>At this point we have 2 databases that are fully prepared.  One
-is the master database in which bgbench is busy accessing and changing
-rows.  It's now time to start the replication daemons.
+is the master database in which <application/pgbench/ is busy
+accessing and changing rows.  It's now time to start the replication
+daemons.
 
 <para>On $MASTERHOST the command to start the replication engine is
 
@@ -188,11 +196,13 @@
 slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"
 </programlisting>
 
-<para>Even though we have the <application/slon/ running on both the
-master and slave, and they are both spitting out diagnostics and other
-messages, we aren't replicating any data yet.  The notices you are
-seeing is the synchronization of cluster configurations between the 2
-<application/slon/ processes.
+<para>Even though we have the <application><link linkend="slon"> slon
+</link></application> running on both the master and slave, and they
+are both spitting out diagnostics and other messages, we aren't
+replicating any data yet.  The notices you are seeing are the
+synchronization of cluster configurations between the 2
+<application><link linkend="slon"> slon </link></application>
+processes.
 
 <para>To start replicating the 4 pgbench tables (set 1) from the
 master (node id 1) to the slave (node id 2), execute the following
@@ -226,23 +236,23 @@
 to copy the current content of all 4 replicated tables.  While doing
 so, of course, the pgbench application will continue to modify the
 database.  When the copy process is finished, the replication daemon
-on <envar/$SLAVEHOST/ will start to catch up by applying the
+on <envar>$SLAVEHOST</envar> will start to catch up by applying the
 accumulated replication log.  It will do this in little steps, 10
 seconds worth of application work at a time.  Depending on the
 performance of the two systems involved, the sizing of the two
 databases, the actual transaction load and how well the two databases
 are tuned and maintained, this catchup process can be a matter of
-minutes, hours, or eons.
+minutes, hours, or eons.</para>
 
 <para>You have now successfully set up your first basic master/slave
 replication system, and the 2 databases should, once the slave has
 caught up, contain identical data.  That's the theory, at least.  In
 practice, it's good to build confidence by verifying that the datasets
-are in fact the same.
+are in fact the same.</para>
 
-<para>The following script will create ordered dumps of the 2 databases and compare
-them.  Make sure that pgbench has completed it's testing, and that your slon
-sessions have caught up.
+<para>The following script will create ordered dumps of the 2
+databases and compare them.  Make sure that <application/pgbench/ has
+completed its testing, and that your slon sessions have caught up.
 
 <programlisting>
 #!/bin/sh
@@ -277,15 +287,18 @@
 else
 	 echo "FAILED - see $CLUSTERNAME.diff for database differences"
 fi
-</programlisting>
+</programlisting></para>
 
 <para>Note that there is somewhat more sophisticated documentation of
-the process in the Slony-I source code tree in a file called
-slony-I-basic-mstr-slv.txt.
-
-<para>If this script returns "FAILED" please contact the developers at
-<ulink url="http://slony.org/"> http://slony.org/</ulink>
+the process in the <productname>Slony-I</productname> source code tree
+in a file called
+<filename>slony-I-basic-mstr-slv.txt</filename>.</para>
+
+<para>If this script returns <command>FAILED</command> please contact the
+developers at <ulink url="http://slony.info/">
+http://slony.info/</ulink></para>
 
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
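
For reference, the subscription step that firstdb.sgml refers to with
<quote/execute the following script/ boils down to something like this sketch,
built from the shell variables the section defines (the script shipped in the
source tree is authoritative):

#!/bin/sh
slonik <<_EOF_
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST';

# Node 2 subscribes to set 1 from provider node 1; no forwarding is
# needed in a two-node cluster.
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
_EOF_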
Index: help.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/help.html -Ldoc/adminguide/help.html -u -w -r1.2 -r1.3
--- doc/adminguide/help.html
+++ doc/adminguide/help.html
@@ -82,7 +82,8 @@
 >13. More Slony-I Help</A
 ></H1
 ><P
->If you are having problems with Slony-I, you have several options for help:
+>If you are having problems with Slony-I, you have several
+options for help:
 
 <P
 ></P
@@ -94,7 +95,7 @@
 TARGET="_top"
 >http://slony.info/</A
 > - the official
-"home" of Slony&#13;</P
+"home" of Slony</P
 ></LI
 ><LI
 ><P
@@ -103,7 +104,7 @@
 HREF="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx"
 TARGET="_top"
 >Howto</A
->&#13;</P
+></P
 ></LI
 ><LI
 ><P
@@ -112,14 +113,14 @@
 HREF="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication"
 TARGET="_top"
 >Varlena GeneralBits </A
-> that may be helpful.&#13;</P
+> that may be helpful.</P
 ></LI
 ><LI
 ><P
 > IRC - There are usually some people on #slony on
 irc.freenode.net who may be able to answer some of your
 questions. There is also a bot named "rtfm_please" that you may want
-to chat with.&#13;</P
+to chat with.</P
 ></LI
 ><LI
 ><P
@@ -130,7 +131,7 @@
 HREF="http://gborg.postgresql.org/mailman/listinfo/slony1"
 TARGET="_top"
 >here. </A
->&#13;</P
+></P
 ></LI
 ><LI
 ><P
@@ -148,7 +149,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN1017"
+NAME="AEN1239"
 >13.1. Other Information Sources</A
 ></H2
 ><P
--- /dev/null
+++ doc/adminguide/app-slon.html
@@ -0,0 +1,481 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>slon</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+TITLE="Slony-I Commands"
+HREF="slony-commands.html"><LINK
+REL="PREVIOUS"
+TITLE="Slony-I Commands"
+HREF="slony-commands.html"><LINK
+REL="NEXT"
+TITLE="slonik"
+HREF="app-slonik.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="REFENTRY"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="slony-commands.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="app-slonik.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><H1
+><A
+NAME="APP-SLON"
+></A
+><B
+CLASS="APPLICATION"
+>slon</B
+></H1
+><DIV
+CLASS="REFNAMEDIV"
+><A
+NAME="AEN343"
+></A
+><H2
+>Name</H2
+><B
+CLASS="APPLICATION"
+>slon</B
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> daemon
+    </DIV
+><DIV
+CLASS="REFSYNOPSISDIV"
+><A
+NAME="AEN350"
+></A
+><H2
+>Synopsis</H2
+><P
+><TT
+CLASS="COMMAND"
+>slon</TT
+> [<TT
+CLASS="REPLACEABLE"
+><I
+>option</I
+></TT
+>...] [<TT
+CLASS="REPLACEABLE"
+><I
+>clustername</I
+></TT
+>
+    [<TT
+CLASS="REPLACEABLE"
+><I
+>conninfo</I
+></TT
+>]]</P
+></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="AEN359"
+></A
+><H2
+>Description</H2
+><P
+>     <B
+CLASS="APPLICATION"
+>slon</B
+> is the daemon application that
+     <SPAN
+CLASS="QUOTE"
+>"runs"</SPAN
+> <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+     replication.  A <B
+CLASS="APPLICATION"
+>slon</B
+> instance must be
+     run for each node in a <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+     cluster.
+    </P
+></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="R1-APP-SLON-3"
+></A
+><H2
+>Options</H2
+><P
+></P
+><DIV
+CLASS="VARIABLELIST"
+><DL
+><DT
+><CODE
+CLASS="OPTION"
+>-d <TT
+CLASS="REPLACEABLE"
+><I
+>debuglevel</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      Specifies the level of verbosity that <B
+CLASS="APPLICATION"
+>slon</B
+> should
+      use when logging its activity.
+      </P
+><P
+>The eight levels of logging are:
+      <P
+></P
+><UL
+><LI
+><P
+>Error
+       </P
+></LI
+><LI
+><P
+>Warn
+       </P
+></LI
+><LI
+><P
+>Config
+       </P
+></LI
+><LI
+><P
+>Info
+       </P
+></LI
+><LI
+><P
+>Debug1
+       </P
+></LI
+><LI
+><P
+>Debug2
+       </P
+></LI
+><LI
+><P
+>Debug3
+       </P
+></LI
+><LI
+><P
+>Debug4
+      </P
+></LI
+></UL
+>
+    </P
+></DD
+><DT
+><CODE
+CLASS="OPTION"
+>-s <TT
+CLASS="REPLACEABLE"
+><I
+>SYNC check interval</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      Specifies the interval, in milliseconds, at which
+      <B
+CLASS="APPLICATION"
+>slon</B
+> should add a SYNC even if none has been
+      mandated by data creation.  Default is 10000 ms.
+     </P
+><P
+>Short sync times keep the master on a <SPAN
+CLASS="QUOTE"
+>"short leash,"</SPAN
+>
+      updating the slaves more frequently.  If you have replicated
+      sequences that are frequently updated <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>without</I
+></SPAN
+> there
+      being tables affected, this prevents long stretches during
+      which only sequences are updated, and therefore <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>no</I
+></SPAN
+>
+      syncs take place.
+    </P
+></DD
+><DT
+><CODE
+CLASS="OPTION"
+>-t <TT
+CLASS="REPLACEABLE"
+><I
+>SYNC interval timeout</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      Default is 60000 ms.
+      </P
+></DD
+><DT
+><CODE
+CLASS="OPTION"
+>-g <TT
+CLASS="REPLACEABLE"
+><I
+>group size</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      Maximum SYNC group size; defaults to 6.  Thus, if a particular
+      node is behind by 200 SYNCs, it will try to group them together
+      into groups of 6.  This would be expected to reduce transaction
+      overhead due to having fewer transactions to <TT
+CLASS="COMMAND"
+>COMMIT</TT
+>.
+     </P
+><P
+>The default of 6 is probably suitable for small systems
+      that can devote only very limited bits of memory to slon.  If you
+      have plenty of memory, it would be reasonable to increase this,
+      as it will increase the amount of work done in each transaction,
+      and will allow a subscriber that is behind by a lot to catch up
+      more quickly.</P
+><P
+>Slon processes usually stay pretty small; even with a large
+      value for this option, slon would be expected to only grow to a
+      few MB in size.</P
+></DD
+><DT
+><CODE
+CLASS="OPTION"
+>-c <TT
+CLASS="REPLACEABLE"
+><I
+>cleanup cycles</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      How often to <TT
+CLASS="COMMAND"
+>VACUUM</TT
+> in cleanup cycles.
+     </P
+><P
+>Set this to zero to disable slon-initiated vacuuming.  If
+      you are using something like
+      <B
+CLASS="APPLICATION"
+>pg_autovacuum</B
+> to initiate vacuums, you
+      may not need slon to initiate vacuums itself.  If you are
+      not, there are some tables Slony-I uses that collect a
+      <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>lot</I
+></SPAN
+> of dead tuples that should be vacuumed
+      frequently.</P
+></DD
+><DT
+><CODE
+CLASS="OPTION"
+>-p <TT
+CLASS="REPLACEABLE"
+><I
+>PID filename</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      Filename in which slon should record its process ID (PID).
+      </P
+></DD
+><DT
+><CODE
+CLASS="OPTION"
+>-f <TT
+CLASS="REPLACEABLE"
+><I
+>config file</I
+></TT
+></CODE
+></DT
+><DD
+><P
+>      File containing <B
+CLASS="APPLICATION"
+>slon</B
+> configuration.
+      </P
+></DD
+></DL
+></DIV
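+><P
+>For illustration, a plausible invocation combining several of these
+options (the cluster name and conninfo string shown here are
+assumptions, not defaults) might be:
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+># run slon with verbose logging, a 1-second SYNC check interval,
+# a larger SYNC group size, and a PID file
+slon -d 2 -s 1000 -g 20 -p /var/run/slon.pid mycluster \
+     "dbname=mydb host=localhost user=slony"</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P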
+></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="AEN444"
+></A
+><H2
+>Exit Status</H2
+><P
+>   <B
+CLASS="APPLICATION"
+>slon</B
+> returns 0 to the shell if it
+   finished normally.  It returns -1 if it encounters any fatal error.
+  </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="slony-commands.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="app-slonik.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Slony-I Commands</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony-commands.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><B
+CLASS="APPLICATION"
+>slonik</B
+></TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: concepts.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/concepts.html -Ldoc/adminguide/concepts.html -u -w -r1.2 -r1.3
--- doc/adminguide/concepts.html
+++ doc/adminguide/concepts.html
@@ -82,7 +82,8 @@
 >4. Slony-I Concepts</A
 ></H1
 ><P
->In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
+>In order to set up a set of Slony-I replicas, it is necessary to
+understand the following major abstractions that it uses:
 
 <P
 ></P
@@ -104,7 +105,7 @@
 ></LI
 ><LI
 ><P
-> Provider and Subscriber</P
+> Origin, Providers and Subscribers</P
 ></LI
 ></UL
 >&#13;</P
@@ -113,11 +114,12 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN212"
+NAME="AEN241"
 >4.1. Cluster</A
 ></H2
 ><P
->In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases.&#13;</P
+>In Slony-I terms, a Cluster is a named set of PostgreSQL
+database instances; replication takes place between those databases.&#13;</P
 ><P
 >The cluster name is specified in each and every Slonik script via the directive:
 <TABLE
@@ -149,7 +151,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN220"
+NAME="AEN249"
 >4.2. Node</A
 ></H2
 ><P
@@ -170,10 +172,12 @@
 ></TABLE
 >&#13;</P
 ><P
->The CONNINFO information indicates a string argument that will ultimately be passed to the <CODE
+>The CONNINFO option specifies a string argument that will
+ultimately be passed to the <CODE
 CLASS="FUNCTION"
 >PQconnectdb()</CODE
-> libpq function. &#13;</P
+> libpq
+function.&#13;</P
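+><P
+>For illustration only (the values shown are not defaults), such a
+conninfo string might look like <TT
+CLASS="COMMAND"
+>dbname=mydb host=node1.example.com port=5432 user=slony</TT
+>.&#13;</P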
 ><P
 >Thus, a Slony-I cluster consists of:
 <P
@@ -196,7 +200,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN233"
+NAME="AEN262"
 >4.3. Replication Set</A
 ></H2
 ><P
@@ -214,39 +218,39 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN238"
->4.4. Provider and Subscriber</A
+NAME="AEN267"
+>4.4. Origin, Providers and Subscribers</A
 ></H2
 ><P
->Each replication set has some <SPAN
-CLASS="QUOTE"
->"master"</SPAN
-> node, which
-winds up being the <SPAN
+>Each replication set has some origin node, which is the
+<SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >only</I
 ></SPAN
-> place where user
-applications are permitted to modify data in the tables that are being
-replicated.  That <SPAN
-CLASS="QUOTE"
->"master"</SPAN
-> may be considered the
-originating <SPAN
+> place where user applications are permitted
+to modify data in the tables that are being replicated.  This might
+also be termed the <SPAN
 CLASS="QUOTE"
->"provider node;"</SPAN
-> it is the main place from
+>"master provider"</SPAN
+>; it is the main place from
 which data is provided.&#13;</P
 ><P
 >Other nodes in the cluster will subscribe to the replication
 set, indicating that they want to receive the data.&#13;</P
 ><P
->The "master" node will never be considered a "subscriber."  But
-Slony-I supports the notion of cascaded subscriptions, that is, a node
-that is subscribed to the "master" may also behave as a "provider" to
-other nodes in the cluster.&#13;</P
+>The origin node will never be considered a <SPAN
+CLASS="QUOTE"
+>"subscriber."</SPAN
+>
+(Ignoring the case where the cluster is reshaped, and the origin is
+moved to another node.)  But Slony-I supports the notion of cascaded
+subscriptions, that is, a node that is subscribed to the origin may
+also behave as a <SPAN
+CLASS="QUOTE"
+>"provider"</SPAN
+> to other nodes in the cluster.&#13;</P
 ></DIV
 ></DIV
 ><DIV
Index: prerequisites.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/prerequisites.sgml
+++ doc/adminguide/prerequisites.sgml
@@ -1,7 +1,7 @@
 <sect1 id="requirements"><title/ Requirements/
 
 <para>Any platform that can run PostgreSQL should be able to run
-Slony-I.
+<productname/Slony-I/.
 
 <para>The platforms that have received specific testing at the time of
 this release are FreeBSD-4X-i386, FreeBSD-5X-i386, FreeBSD-5X-alpha,
@@ -9,140 +9,155 @@
 <trademark/Solaris/-2.8-SPARC, <trademark/Solaris/-2.9-SPARC, AIX 5.1
 and OpenBSD-3.5-sparc64.
 
-<para>There have been reports of success at running Slony-I hosts that
-are running PostgreSQL on Microsoft <trademark/Windows/.  At this
-time, the <quote/binary/ applications (<emphasis/e.g./ -
-<application/slonik/, <application/slon/) do not run on
-<trademark/Windows/, but a <application/slon/ running on one of the
+<para>There have been reports of success at running
+<productname>Slony-I</productname> hosts that are running PostgreSQL
+on Microsoft <trademark>Windows</trademark>.  At this time, the
+<quote>binary</quote> applications (<emphasis>e.g.</emphasis> -
+<application><link linkend="slonik"> slonik </link></application>,
+<application><link linkend="slon"> slon </link></application>) do not
+run on <trademark>Windows</trademark>, but a <application><link
+linkend="slon"> slon </link></application> running on one of the
 Unix-like systems has no reason to have difficulty connecting to a
-PostgreSQL instance running on <trademark/Windows/.
+PostgreSQL instance running on <trademark>Windows</trademark>.
 
-<para> It ought to be possible to port <application>slon</application>
-and <application>slonik</application> to run on
+<para> It ought to be possible to port <application><link
+linkend="slon"> slon </link></application> and <application><link
+linkend="slonik"> slonik </link></application> to run on
+<trademark>Windows</trademark>; the conspicuous challenge is having
 a POSIX-like <filename>pthreads</filename> implementation for
-<application>slon</application>, as it uses that to have multiple
-threads of execution.  There are reports of there being a
-<filename>pthreads</filename> library for
+<application><link linkend="slon"> slon </link></application>, as it
+uses that to have multiple threads of execution.  There are reports of
+there being a <filename>pthreads</filename> library for
 <trademark>Windows</trademark>, so nothing should prevent some
 interested party from volunteering to do the port.</para>
 
-<sect2><title/ Software needed/
+<sect2><title> Software needed</title>
 <para>
 <itemizedlist>
 	
-<listitem><Para> GNU make.  Other make programs will not work.  GNU
-make is often installed under the name <command/gmake/; this document
-will therefore always refer to it by that name. (On Linux-based
-systems GNU make is typically the default make, and is called
-<command/make/) To test to see if your make is GNU make enter
-<command/make version/.  Version 3.76 or later will suffice; previous
-versions may not.
-
-<listitem><Para> You need an ISO/ANSI C compiler.  Recent versions of
-<application/GCC/ work.
-
-<listitem><Para> You also need a recent version of PostgreSQL
-<emphasis/source/.  Slony-I depends on namespace support so you must
-have version 7.3.3 or newer to be able to build and use Slony-I.  Rod
-Taylor has <quote/hacked up/ a version of Slony-I that works with
-version 7.2; if you desperately need that, look for him on the <ulink
-url="http://www.postgresql.org/lists.html"> PostgreSQL Hackers mailing
-list</ulink>.  It is not anticipated that 7.2 will be supported by any
-official <application/Slony-I/ release.
+<listitem><para> GNU make.  Other make programs will not work.  GNU
+make is often installed under the name <command>gmake</command>; this
+document will therefore always refer to it by that name. (On
+Linux-based systems GNU make is typically the default make, and is
+called <command>make</command>.) To test whether your make is GNU
+make, enter <command>make --version</command>.  Version 3.76 or later
+will suffice; previous versions may not.
+
+<listitem><para> You need an ISO/ANSI C compiler.  Recent versions of
+<application>GCC</application> work.
+
+<listitem><para> You also need a recent version of PostgreSQL
+<emphasis>source</emphasis>.  <productname>Slony-I</productname>
+depends on namespace support so you must have version 7.3.3 or newer
+to be able to build and use <productname>Slony-I</productname>.  Rod
+Taylor has <quote>hacked up</quote> a version of
+<productname>Slony-I</productname> that works with version 7.2; if you
+desperately need that, look for him on the PostgreSQL Hackers mailing
+list.  It is not anticipated that 7.2 will be supported by any
+official <productname>Slony-I</productname> release.
 
-<listitem><Para> GNU packages may be included in the standard
+<listitem><para> GNU packages may be included in the standard
 packaging for your operating system, or you may need to look for
 source code at your local GNU mirror (see <ulink
 url="http://www.gnu.org/order/ftp.html">
 http://www.gnu.org/order/ftp.html</ulink> for a list) or at <ulink
 url="ftp://ftp.gnu.org/gnu"> ftp://ftp.gnu.org/gnu</ulink> .)
 
-<listitem><Para> If you need to obtain PostgreSQL source, you can
-download it from your favorite PostgreSQL mirror (see <ulink
+<listitem><para> If you need to obtain PostgreSQL source, you can
+download it from your favorite PostgreSQL mirror.  See <ulink
 url="http://www.postgresql.org/mirrors-www.html">
-http://www.postgresql.org/mirrors-www.html </ulink> for a list), or
-via <ulink url="http://bt.postgresql.org/"> BitTorrent</ulink>.
+http://www.postgresql.org/mirrors-www.html </ulink> for a list.
 </itemizedlist>
 
 <para>Also check to make sure you have sufficient disk space.  You
 will need approximately 5MB for the source tree during build and
 installation.
 
-<sect2><title/ Getting Slony-I Source/
+<sect2><title> Getting <productname>Slony-I</productname>
+Source</title>
 
-<para>You can get the Slony-I source from <ulink
-url="http://developer.postgresql.org/~wieck/slony1/download/">
+<para>You can get the <productname>Slony-I</productname> source from
+<ulink url="http://developer.postgresql.org/~wieck/slony1/download/">
 http://developer.postgresql.org/~wieck/slony1/download/</ulink>
 </para>
 
 </sect2>
 
-<sect2><title/ Time Synchronization/
+<sect2><title> Time Synchronization</title>
 
 <para> All the servers used within the replication cluster need to
 have their Real Time Clocks in sync. This is to ensure that slon
 doesn't error with messages indicating that the slave is already ahead of
 the master during replication.  We recommend you have ntpd running on
-all nodes, with subscriber nodes using the <quote/master/ provider
-node as their time server.
+all nodes, with subscriber nodes using the <quote>master</quote>
+provider node as their time server.
 
-<para> It is possible for Slony-I to function even in the face of
-there being some time discrepancies, but having systems <quote/in
-sync/ is usually pretty important for distributed applications.
+<para> It is possible for <productname>Slony-I</productname> to
+function even in the face of there being some time discrepancies, but
+having systems <quote>in sync</quote> is usually pretty important for
+distributed applications.
 
-<Para> See <ulink url="http://www.ntp.org/"> www.ntp.org </ulink> for
+<para> See <ulink url="http://www.ntp.org/"> www.ntp.org </ulink> for
 more details about NTP (Network Time Protocol).
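+
+<para>As a quick sanity check (assuming the standard NTP tools are
+installed), <command>ntpq -p</command> on each node should show that
+it is synchronized to a time source:
+
+<programlisting>
+# an asterisk in the first column marks the selected time source
+ntpq -p
+</programlisting>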
 
-<sect2><title/ Network Connectivity/
+</sect2>
+
+<sect2><title> Network Connectivity</title>
 
 <para>It is necessary that the hosts that are to replicate between one
-another have <emphasis/bidirectional/ network communications to the
-PostgreSQL instances.  That is, if node B is replicating data from
-node A, it is necessary that there be a path from A to B and from B to
-A.  It is recommended that all nodes in a Slony-I cluster allow this
-sort of bidirection communications from any node in the cluster to any
-other node in the cluster.
+another have <emphasis>bidirectional</emphasis> network communications
+to the PostgreSQL instances.  That is, if node B is replicating data
+from node A, it is necessary that there be a path from A to B and from
+B to A.  It is recommended that all nodes in a
+<productname>Slony-I</productname> cluster allow this sort of
+bidirectional communications from any node in the cluster to any other
+node in the cluster.
 
 <para>Note that the network addresses need to be consistent across all
-of the nodes.  Thus, if there is any need to use a <quote/public/
-address for a node, to allow remote/VPN access, that <quote/public/
-address needs to be able to be used consistently throughout the
-Slony-I cluster, as the address is propagated throughout the cluster
-in table <envar/sl_path/.
+of the nodes.  Thus, if there is any need to use a
+<quote>public</quote> address for a node, to allow remote/VPN access,
+that <quote>public</quote> address needs to be usable
+consistently throughout the <productname>Slony-I</productname>
+cluster, as the address is propagated throughout the cluster in table
+<envar>sl_path</envar>.
 
 <para>A possible workaround for this, in environments where firewall
-rules are particularly difficult to implement, may be to establish
-SSH Tunnels that are created on each host that allow remote access
-through IP address 127.0.0.1, with a different port for each
-destination.
-
-<para> Note that <application/slonik/ and the <application/slon/
-instances need no special connections or protocols to communicate with
-one another; they just need to be able to get access to the
-<application/PostgreSQL/ databases, connecting as a <quote/superuser/.
+rules are particularly difficult to implement, may be to establish SSH
+tunnels on each host that allow remote access through IP address
+127.0.0.1, with a different port for each destination.
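+
+<para>A sketch of such a tunnel (the host names and port numbers here
+are purely illustrative): on the local host, forward a spare local
+port to the remote node's <productname>PostgreSQL</productname> port,
+and point the corresponding conninfo at 127.0.0.1:
+
+<programlisting>
+# forward local port 5433 to port 5432 on the remote node
+ssh -N -f -L 5433:127.0.0.1:5432 slony@remote.example.com
+# conninfo for that node then becomes: host=127.0.0.1 port=5433
+</programlisting>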
+
+<para> Note that <application>slonik</application> and the
+<application>slon</application> instances need no special connections
+or protocols to communicate with one another; they just need to be
+able to get access to the <application>PostgreSQL</application>
+databases, connecting as a <quote>superuser</quote>.
 
 <para> An implication of the communications model is that the entire
-extended network in which a Slony-I cluster operates must be able to
-be treated as being secure.  If there is a remote location where you
-cannot trust the Slony-I node to be considered <quote/secured,/ this
-represents a vulnerability that adversely the security of the entire
-cluster.  In effect, the security policies throughout the cluster can
-only be considered as stringent as those applied at the
-<emphasis/weakest/ link.  Running a full-blown Slony-I node at a
-branch location that can't be kept secure compromises security for the
+extended network in which a <productname>Slony-I</productname> cluster
+operates must be able to be treated as being secure.  If there is a
+remote location where you cannot trust the
+<productname>Slony-I</productname> node to be considered
+<quote>secured,</quote> this represents a vulnerability that adversely
+affects the security of the entire cluster.  In effect, the security policies
+throughout the cluster can only be considered as stringent as those
+applied at the <emphasis>weakest</emphasis> link.  Running a
+full-blown <productname>Slony-I</productname> node at a branch
+location that can't be kept secure compromises security for the
 cluster.
 
 <para>Among the future plans is a feature whereby updates for a
 particular replication set would be serialized via a scheme called
-<quote/log shipping./ The data stored in sl_log_1 and sl_log_2 would
-be written out to log files on disk.  These files could be transmitted
-in any manner desired, whether via scp, FTP, burning them onto
-DVD-ROMs and mailing them, or even by recording them on a USB
-<quote/flash device/ and attaching them to birds, allowing a sort of
-<quote/avian transmission protocol./ This will allow one way
-communications so that <quote/subscribers/ that use log shipping would
-have no need for access to other Slony-I nodes.
+<quote>log shipping.</quote> The data stored in sl_log_1 and sl_log_2
+would be written out to log files on disk.  These files could be
+transmitted in any manner desired, whether via scp, FTP, burning them
+onto DVD-ROMs and mailing them, or even by recording them on a USB
+<quote>flash device</quote> and attaching them to birds, allowing a
+sort of <quote>avian transmission protocol.</quote> This will allow
+one way communications so that <quote>subscribers</quote> that use log
+shipping would have no need for access to other
+<productname>Slony-I</productname> nodes.
+</sect2>
 </sect1>
 
 <!-- Keep this comment at the end of the file
--- /dev/null
+++ doc/adminguide/slonyadmin.html
@@ -0,0 +1,752 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+> Slony-I Administration </TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="PREVIOUS"
+TITLE="slonik"
+HREF="app-slonik.html"><LINK
+REL="NEXT"
+TITLE="Slon daemons"
+HREF="slonstart.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="ARTICLE"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="app-slonik.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="slonstart.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="ARTICLE"
+><DIV
+CLASS="TITLEPAGE"
+><H1
+CLASS="TITLE"
+><A
+NAME="SLONYADMIN"
+>Slony-I Administration</A
+></H1
+><HR></DIV
+><DIV
+CLASS="TOC"
+><DL
+><DT
+><B
+>Table of Contents</B
+></DT
+><DT
+>1. <A
+HREF="slonyadmin.html#ALTPERL"
+>Slony-I Administration Scripts</A
+></DT
+><DT
+>2. <A
+HREF="slonstart.html"
+>Slon daemons</A
+></DT
+><DT
+>3. <A
+HREF="subscribenodes.html"
+>Subscribing Nodes</A
+></DT
+><DT
+>4. <A
+HREF="monitoring.html"
+>Monitoring</A
+></DT
+><DT
+>5. <A
+HREF="maintenance.html"
+>Slony-I Maintenance</A
+></DT
+><DT
+>6. <A
+HREF="reshape.html"
+>Reshaping a Cluster</A
+></DT
+><DT
+>7. <A
+HREF="failover.html"
+>Doing switchover and failover with Slony-I</A
+></DT
+><DT
+>8. <A
+HREF="listenpaths.html"
+>Slony Listen Paths</A
+></DT
+><DT
+>9. <A
+HREF="addthings.html"
+>Adding Things to Replication</A
+></DT
+><DT
+>10. <A
+HREF="dropthings.html"
+>Dropping things from Slony Replication</A
+></DT
+><DT
+>11. <A
+HREF="ddlchanges.html"
+>Database Schema Changes (DDL)</A
+></DT
+><DT
+>12. <A
+HREF="firstdb.html"
+>Replicating Your First Database</A
+></DT
+><DT
+>13. <A
+HREF="help.html"
+>More Slony-I Help</A
+></DT
+></DL
+></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="ALTPERL"
+>1. Slony-I Administration Scripts</A
+></H1
+><P
+>In the <TT
+CLASS="FILENAME"
+>altperl</TT
+> directory in the <B
+CLASS="APPLICATION"
+>CVS</B
+>
+tree, there is a sizable set of <B
+CLASS="APPLICATION"
+>Perl</B
+> scripts that may be
+used to administer a set of <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> instances, which
+support arbitrary numbers of nodes.&#13;</P
+><P
+>Most of them generate Slonik scripts that are then to be passed
+on to the <A
+HREF="app-slonik.html#SLONIK"
+> <B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+> utility
+to be submitted to all of the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> nodes in a
+particular cluster.  At one time, this embedded running <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> on the slonik scripts.
+Unfortunately, this turned out to be a pretty large calibre
+<SPAN
+CLASS="QUOTE"
+>"foot gun,"</SPAN
+> as minor typos on the command line led, on a couple
+of occasions, to pretty calamitous actions, so the behaviour has been
+changed so that the scripts simply write their output to standard output.
+An administrator should review the script <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>before</I
+></SPAN
+> submitting
+it to <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+>.&#13;</P
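+><P
+>A sketch of that workflow, using one of the scripts described below
+(the output file name here is an illustrative assumption):
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+># generate the slonik script, review it, then submit it
+perl init_cluster.pl &gt; init_cluster.slonik
+less init_cluster.slonik
+slonik &lt; init_cluster.slonik</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P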
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN508"
+>1.1. Node/Cluster Configuration - cluster.nodes</A
+></H2
+><P
+>The UNIX environment variable <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+> is used to
+determine what Perl configuration file will be used to control the
+shape of the nodes in a <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster.&#13;</P
+><P
+>What variables are set up...
+<P
+></P
+><UL
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$SETNAME</CODE
+>=orglogs;	# What is the name of the replication set?&#13;</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$LOGDIR</CODE
+>='/opt/OXRS/log/LOGDBS';  # What is the base directory for logs?&#13;</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$SLON_BIN_PATH</CODE
+>='/opt/dbs/pgsql74/bin';  # Where to look for slony binaries&#13;</P
+></LI
+><LI
+><P
+> <CODE
+CLASS="ENVAR"
+>$APACHE_ROTATOR</CODE
+>="/opt/twcsds004/OXRS/apache/rotatelogs";  # If set, where to find Apache log rotator</P
+></LI
+></UL
+>&#13;</P
+><P
+>You then define the set of nodes that are to be replicated using
+a set of calls to <CODE
+CLASS="FUNCTION"
+>add_node()</CODE
+>.&#13;</P
+><P
+><TT
+CLASS="COMMAND"
+>  add_node (host =&#62; '10.20.30.40', dbname =&#62; 'orglogs', port =&#62; 5437,
+			  user =&#62; 'postgres', node =&#62; 4, parent =&#62; 1);</TT
+></P
+><P
+>The set of parameters for <CODE
+CLASS="FUNCTION"
+>add_node()</CODE
+> are thus:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    my %PARAMS =   (host=&#62; undef,		# Host name
+    	   	dbname =&#62; 'template1',	# database name
+    		port =&#62; 5432,		# Port number
+    		user =&#62; 'postgres',	# user to connect as
+    		node =&#62; undef,		# node number
+    		password =&#62; undef,	# password for user
+    		parent =&#62; 1,		# which node is parent to this node
+    		noforward =&#62; undef	# shall this node be set up to forward results?
+    );</PRE
+></TD
+></TR
+></TABLE
+>
+     </P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN534"
+>1.2. Set configuration - cluster.set1, cluster.set2</A
+></H2
+><P
+>The UNIX environment variable <CODE
+CLASS="ENVAR"
+>SLONYSET</CODE
+> is used to
+determine what Perl configuration file specifies which
+objects will be contained in a particular replication set.&#13;</P
+><P
+>Unlike <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+>, which is essential for
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> of the <A
+HREF="app-slonik.html#SLONIK"
+> slonik</A
+>-generating scripts, this only needs to be set when running
+<TT
+CLASS="FILENAME"
+>create_set.pl</TT
+>, as that is the only script used to
+control what tables will be in a particular replication set.</P
+><P
+>What variables are set up...
+<P
+></P
+><UL
+><LI
+><P
+> $TABLE_ID = 44;	 </P
+><P
+> Each table must be identified by a unique number; this variable controls where numbering starts</P
+></LI
+><LI
+><P
+> @PKEYEDTABLES		&#13;</P
+><P
+> An array of names of tables to be replicated that have a
+defined primary key so that Slony-I can automatically select its key&#13;</P
+></LI
+><LI
+><P
+> %KEYEDTABLES&#13;</P
+><P
+> A hash table of tables to be replicated, where the hash index
+is the table name, and the hash value is the name of a unique not null
+index suitable as a "candidate primary key."&#13;</P
+></LI
+><LI
+><P
+> @SERIALTABLES&#13;</P
+><P
+> An array of names of tables to be replicated that have no
+candidate for primary key.  Slony-I will add a key field based on a
+sequence that Slony-I generates&#13;</P
+></LI
+><LI
+><P
+> @SEQUENCES&#13;</P
+><P
+> An array of names of sequences that are to be replicated&#13;</P
+></LI
+></UL
+>&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN560"
+>1.3. build_env.pl</A
+></H2
+><P
+>Queries a database, generating output hopefully suitable for
+<TT
+CLASS="FILENAME"
+>slon.env</TT
+> consisting of:
+<P
+></P
+><UL
+><LI
+><P
+> a set of <CODE
+CLASS="FUNCTION"
+>add_node()</CODE
+> calls to configure the cluster</P
+></LI
+><LI
+><P
+> The arrays <CODE
+CLASS="ENVAR"
+>@KEYEDTABLES</CODE
+>, <CODE
+CLASS="ENVAR"
+>@SERIALTABLES</CODE
+>, and <CODE
+CLASS="ENVAR"
+>@SEQUENCES</CODE
+></P
+></LI
+></UL
+>&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN573"
+>1.4. create_set.pl</A
+></H2
+><P
+>This requires <CODE
+CLASS="ENVAR"
+>SLONYSET</CODE
+> to be set as well as
+<CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+>; it is used to generate the Slonik script to set up
+a replication set consisting of a set of tables and sequences that are
+to be replicated.&#13;</P
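+><P
+>For illustration (the file names here are assumptions), its use
+might look like:
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>export SLONYNODES=cluster.nodes SLONYSET=cluster.set1
+perl create_set.pl &gt; create_set.slonik
+# review create_set.slonik before submitting it
+slonik &lt; create_set.slonik</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P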
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN578"
+>1.5. drop_node.pl</A
+></H2
+><P
+>Generates Slonik script to drop a node from a Slony-I cluster.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN581"
+>1.6. drop_set.pl</A
+></H2
+><P
+>Generates Slonik script to drop a replication set (<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>e.g.</I
+></SPAN
+> - set of tables and sequences) from a Slony-I cluster.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN585"
+>1.7. failover.pl</A
+></H2
+><P
+>Generates Slonik script to request failover from a dead node to some new origin&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN588"
+>1.8. init_cluster.pl</A
+></H2
+><P
+>Generates Slonik script to initialize a whole Slony-I cluster,
+including setting up the nodes, communications paths, and the listener
+routing.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN591"
+>1.9. merge_sets.pl</A
+></H2
+><P
+>Generates Slonik script to merge two replication sets together.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN594"
+>1.10. move_set.pl</A
+></H2
+><P
+>Generates Slonik script to move the origin of a particular set to a different node.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN597"
+>1.11. replication_test.pl</A
+></H2
+><P
+>Script to test whether Slony-I is successfully replicating data.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN600"
+>1.12. restart_node.pl</A
+></H2
+><P
+>Generates Slonik script to request the restart of a node.  This was
+particularly useful pre-1.0.5 when nodes could get snarled up when
+slon daemons died.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN603"
+>1.13. restart_nodes.pl</A
+></H2
+><P
+>Generates Slonik script to restart all nodes in the cluster.  Not
+particularly useful...&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN606"
+>1.14. show_configuration.pl</A
+></H2
+><P
+>Displays an overview of how the environment (e.g. - <CODE
+CLASS="ENVAR"
+>SLONYNODES</CODE
+>) is set
+to configure things.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN610"
+>1.15. slon_kill.pl</A
+></H2
+><P
+>Kills slony watchdog and all slon daemons for the specified set.  It
+only works if those processes are running on the local host, of
+course!&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN613"
+>1.16. slon_pushsql.pl</A
+></H2
+><P
+>Generates Slonik script to push DDL changes to a replication set.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN616"
+>1.17. slon_start.pl</A
+></H2
+><P
+>This starts a slon daemon for the specified cluster and node, and uses
+slon_watchdog.pl to keep it running.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN619"
+>1.18. slon_watchdog.pl</A
+></H2
+><P
+>Used by slon_start.pl...&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN622"
+>1.19. slon_watchdog2.pl</A
+></H2
+><P
+>This is a somewhat smarter watchdog; it monitors a particular Slony-I
+node, and restarts the slon process if it hasn't seen updates applied in
+20 minutes or more.&#13;</P
+><P
+>This is helpful if there is an unreliable network connection such that
+the slon sometimes stops working without becoming aware of it...&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN626"
+>1.20. subscribe_set.pl</A
+></H2
+><P
+>Generates Slonik script to subscribe a particular node to a particular replication set.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN629"
+>1.21. uninstall_nodes.pl</A
+></H2
+><P
+>This goes through and drops the Slony-I schema from each node; use
+this if you want to destroy replication throughout a cluster.  This is
+a VERY unsafe script!&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN632"
+>1.22. unsubscribe_set.pl</A
+></H2
+><P
+>Generates Slonik script to unsubscribe a node from a replication set.&#13;</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN635"
+>1.23. update_nodes.pl</A
+></H2
+><P
+>Generates Slonik script to tell all the nodes to update the Slony-I
+functions.  This will typically be needed when you upgrade from one
+version of Slony-I to another.&#13;</P
+></DIV
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="app-slonik.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="slonstart.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><B
+CLASS="APPLICATION"
+>slonik</B
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+>&nbsp;</TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slon daemons</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -1,17 +1,20 @@
 <qandaset>
 
 <qandaentry>
+
 <question><para>I looked for the <envar/_clustername/ namespace, and
 it wasn't there.</question>
 
-<answer><para> If the DSNs are wrong, then slon instances can't connect to the nodes.
+<answer><para> If the DSNs are wrong, then slon instances can't
+connect to the nodes.
 
 <para>This will generally lead to nodes remaining entirely untouched.
 
-<para>Recheck the connection configuration.  By the way, since
-<application/slon/ links to libpq, you could have password information
-stored in <filename> <envar>$HOME</envar>/.pgpass</filename>,
-partially filling in right/wrong authentication information there.
+<para>Recheck the connection configuration.  By the way, since <link
+linkend="slon"> <application/slon/ </link> links to libpq, you could
+have password information stored in <filename>
+<envar>$HOME</envar>/.pgpass</filename>, partially filling in
+right/wrong authentication information there.
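+
+<para>For reference, each line of <filename>.pgpass</filename> takes
+the form (the fields here are placeholders):
+
+<programlisting>
+hostname:port:database:username:password
+</programlisting>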
 </answer>
 </qandaentry>
 
@@ -27,7 +30,7 @@
 </screen>
 
 <answer><para>
-On AIX and Solaris (and possibly elsewhere), both Slony-I <emphasis/and PostgreSQL/ must be compiled with the <option/--enable-thread-safety/ option.  The above results when PostgreSQL isn't so compiled.
+On AIX and Solaris (and possibly elsewhere), both
+<productname/Slony-I/ <emphasis>and
+<productname/PostgreSQL/</emphasis> must be compiled with the
+<option/--enable-thread-safety/ option.  The above results when
+<productname/PostgreSQL/ isn't so compiled.
 
 <para>What breaks here is that the libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, thereby leading to the request failing.
 
@@ -38,7 +41,7 @@
 
 <para>For instance, I ran into the problem once when
 <envar/LD_LIBRARY_PATH/ had been set, on Solaris, to point to
-libraries from an old PostgreSQL compile.  That meant that even though
+libraries from an old <productname/PostgreSQL/ compile.  That meant that even though
 the database <emphasis/had/ been compiled with
 <option/--enable-thread-safety/, and <application/slon/ had been
 compiled against that, <application/slon/ was being dynamically linked
@@ -50,7 +53,7 @@
 <question> <para>I tried creating a CLUSTER NAME with a "-" in it.
 That didn't work.
 
-<answer><Para> Slony-I uses the same rules for unquoted identifiers as the PostgreSQL
+<answer><Para> <productname/Slony-I/ uses the same rules for unquoted identifiers as the <productname/PostgreSQL/
 main parser, so no, you probably shouldn't put a "-" in your
 identifier name.
 
@@ -103,7 +106,7 @@
 <filename><envar/$(HOME)//.pgpass.</filename>
 
 <qandaentry>
-<question><Para>Slonik fails - cannot load PostgreSQL library - <command>PGRES_FATAL_ERROR load '$libdir/xxid';</command>
+<question><Para>Slonik fails - cannot load <productname/PostgreSQL/ library - <command>PGRES_FATAL_ERROR load '$libdir/xxid';</command>
 
 <para> When I run the sample setup script I get an error message similar
 to:
@@ -114,24 +117,25 @@
 </command>
 
 <answer><para> Evidently, you haven't got the <filename/xxid.so/
-library in the <envar/$libdir/ directory that the PostgreSQL instance
-is using.  Note that the Slony-I components need to be installed in
-the PostgreSQL software installation for <emphasis/each and every one/
-of the nodes, not just on the <quote/master node./
+library in the <envar/$libdir/ directory that the <productname/PostgreSQL/ instance
+is using.  Note that the <productname/Slony-I/ components need to be installed in
+the <productname/PostgreSQL/ software installation for <emphasis/each and every one/
+of the nodes, not just on the origin node.
 
 <para>This may also point to there being some other mismatch between
-the PostgreSQL binary instance and the Slony-I instance.  If you
-compiled Slony-I yourself, on a machine that may have multiple
-PostgreSQL builds <quote/lying around,/ it's possible that the slon or
+the <productname/PostgreSQL/ binary instance and the <productname/Slony-I/ instance.  If you
+compiled <productname/Slony-I/ yourself, on a machine that may have multiple
+<productname/PostgreSQL/ builds <quote/lying around,/ it's possible that the slon or
 slonik binaries are asking to load something that isn't actually in
-the library directory for the PostgreSQL database cluster that it's
+the library directory for the <productname/PostgreSQL/ database cluster that it's
 hitting.
 
 <para>Long and short: This points to a need to <quote/audit/ what
-installations of PostgreSQL and Slony you have in place on the
-machine(s).  Unfortunately, just about any mismatch will cause things
-not to link up quite right.  See also <link linkend="SlonyFAQ02">
-SlonyFAQ02 </link> concerning threading issues on Solaris ...
+installations of <productname/PostgreSQL/ and <productname/Slony-I/
+you have in place on the machine(s).  Unfortunately, just about any
+mismatch will cause things not to link up quite right.  See also <link
+linkend="SlonyFAQ02"> SlonyFAQ02 </link> concerning threading issues
+on Solaris ...
 
 <qandaentry>
 <question><Para>Table indexes with FQ namespace names
@@ -161,7 +165,7 @@
 <para>Oops.  What I forgot to mention, as well, was that I was trying
 to add <emphasis/TWO/ subscribers, concurrently.
 
-<answer><para> That doesn't work out: Slony-I won't work on the
+<answer><para> That doesn't work out: <productname/Slony-I/ won't run the
 <command/COPY/ commands concurrently.  See
 <filename>src/slon/remote_worker.c</filename>, function
 <function/copy_set()/
@@ -173,7 +177,7 @@
 setting up the subscription.
 
 <para>It could also be possible for there to be an old outstanding
-transaction blocking Slony-I from processing the sync.  You might want
+transaction blocking <productname/Slony-I/ from processing the sync.  You might want
 to take a look at pg_locks to see what's up:
 
 <screen>
@@ -197,11 +201,11 @@
 setting up the first subscriber; it won't start on the second one
 until the first one has completed subscribing.
 
-<para>By the way, if there is more than one database on the PostgreSQL
+<para>By the way, if there is more than one database on the <productname/PostgreSQL/
 cluster, and activity is taking place on the OTHER database, that will
 lead to there being <quote/transactions earlier than XID whatever/ being
 found to be still in progress.  The fact that it's a separate database
-on the cluster is irrelevant; Slony-I will wait until those old
+on the cluster is irrelevant; <productname/Slony-I/ will wait until those old
 transactions terminate.
 <qandaentry>
 <question><Para>
@@ -238,16 +242,17 @@
   delete from _slonyschema.sl_table where tab_id = 40;
 </programlisting>
 
-<para>The schema will obviously depend on how you defined the Slony-I
+<para>The schema will obviously depend on how you defined the <productname/Slony-I/
 cluster.  The table ID, in this case, 40, will need to change to the
 ID of the table you want to have go away.
 
 You'll have to run these three queries on all of the nodes, preferably
-firstly on the "master" node, so that the dropping of this propagates
-properly.  Implementing this via a SLONIK statement with a new Slony
-event would do that.  Submitting the three queries using EXECUTE
-SCRIPT could do that.  Also possible would be to connect to each
-database and submit the queries by hand.
+first on the origin node, so that the dropping of this propagates
+properly.  Implementing this via a <link linkend="slonik"> slonik
+</link> statement with a new <productname/Slony-I/ event would do
+that.  Submitting the three queries using <command/EXECUTE SCRIPT/
+could do that.  Also possible would be to connect to each database and
+submit the queries by hand.
 </itemizedlist>
 <qandaentry>
 <question><Para>I need to drop a sequence from a replication set
@@ -284,10 +289,10 @@
 the sequence everywhere <quote/at once./ Or they may be applied by
 hand to each of the nodes.
 
-<para>Similarly to <command/SET DROP TABLE/, this should be in place for Slony-I version
+<para>Similarly to <command/SET DROP TABLE/, this should be in place for <productname/Slony-I/ version
 1.0.5 as <command/SET DROP SEQUENCE./
 <qandaentry>
-<question><Para>Slony-I: cannot add table to currently subscribed set 1
+<question><Para><productname/Slony-I/: cannot add table to currently subscribed set 1
 
 <para> I tried to add a table to a set, and got the following message:
 
@@ -305,7 +310,7 @@
 <qandaentry>
 <question><Para>Some nodes start consistently falling behind
 
-<para>I have been running Slony-I on a node for a while, and am seeing
+<para>I have been running <productname/Slony-I/ on a node for a while, and am seeing
 system performance suffering.
 
 <para>I'm seeing long running queries of the form:
@@ -322,7 +327,7 @@
 
 <para> Slon daemons already vacuum a bunch of tables, and
 <filename/cleanup_thread.c/ contains a list of tables that are
-frequently vacuumed automatically.  In Slony-I 1.0.2,
+frequently vacuumed automatically.  In <productname/Slony-I/ 1.0.2,
 <envar/pg_listener/ is not included.  In 1.0.5 and later, it is
 regularly vacuumed, so this should cease to be a direct issue.
 
@@ -338,9 +343,9 @@
 <answer><para>Ouch.  What happens here is a conflict between:
 <itemizedlist>
 
-<listitem><para> <application/pg_dump/, which has taken out an <command/AccessShareLock/ on all of the tables in the database, including the Slony-I ones, and
+<listitem><para> <application/pg_dump/, which has taken out an <command/AccessShareLock/ on all of the tables in the database, including the <productname/Slony-I/ ones, and
 
-<listitem><para> A Slony-I sync event, which wants to grab a <command/AccessExclusiveLock/ on	 the table <envar/sl_event/.
+<listitem><para> A <productname/Slony-I/ sync event, which wants to grab an <command/AccessExclusiveLock/ on the table <envar/sl_event/.
 </itemizedlist>
 
 <para>The initial query that will be blocked is thus:
@@ -384,34 +389,37 @@
 <question><Para>The slons spent the weekend out of commission [for
 some reason], and it's taking a long time to get a sync through.
 
-<answer><para>
-You might want to take a look at the sl_log_1/sl_log_2 tables, and do
-a summary to see if there are any really enormous Slony-I transactions
-in there.  Up until at least 1.0.2, there needs to be a slon connected
-to the master in order for <command/SYNC/ events to be generated.
+<answer><para> You might want to take a look at the sl_log_1/sl_log_2
+tables, and do a summary to see if there are any really enormous
+<productname/Slony-I/ transactions in there.  Up until at least 1.0.2,
+there needs to be a slon connected to the origin in order for
+<command/SYNC/ events to be generated.
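+
+<para>A sketch of such a summary (the cluster name
+<quote/mycluster/ and database name are assumptions):
+
+<screen>
+psql -d mydb -c "select log_xid, count(*) from _mycluster.sl_log_1
+                 group by log_xid order by count(*) desc limit 10;"
+</screen>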
 
 <para>If none are being generated, then all of the updates until the next
-one is generated will collect into one rather enormous Slony-I
+one is generated will collect into one rather enormous <productname/Slony-I/
 transaction.
 
-<para>Conclusion: Even if there is not going to be a subscriber around, you
-<emphasis/really/ want to have a slon running to service the <quote/master/ node.
-
-<para>Some future version (probably 1.1) may provide a way for
-<command/SYNC/ counts to be updated on the master by the stored
-function that is invoked by the table triggers.
+<para>Conclusion: Even if there is not going to be a subscriber
+around, you <emphasis/really/ want to have a slon running to service
+the origin node.
+
+<para><productname/Slony-I/ 1.1 provides a stored procedure that
+allows <command/SYNC/ counts to be updated on the origin based on a
+<application/cron/ job even if there is no <link linkend="slon"> slon
+</link> daemon running.
 
 <qandaentry>
-<question><Para>I pointed a subscribing node to a different parent and it stopped replicating
+<question><Para>I pointed a subscribing node to a different provider
+and it stopped replicating
 
 <answer><para>
 We noticed this happening when we wanted to re-initialize a node,
 where we had configuration thus:
 
 <itemizedlist>
-<listitem><para> Node 1 - master
-<listitem><para> Node 2 - child of node 1 - the node we're reinitializing
-<listitem><para> Node 3 - child of node 3 - node that should keep replicating
+<listitem><para> Node 1 - provider
+<listitem><para> Node 2 - subscriber to node 1 - the node we're reinitializing
+<listitem><para> Node 3 - subscriber to node 2 - node that should keep replicating
 </itemizedlist>
 
 <para>The subscription for node 3 was changed to have node 1 as
@@ -462,12 +470,12 @@
 <para>The issues of "listener paths" are discussed further at <link
 linkend="ListenPaths"> Slony Listen Paths </link>
 
-<qandaentry>
+<qandaentry id="faq17">
 <question><Para>After dropping a node, sl_log_1 isn't getting purged out anymore.
 
 <answer><para> This is a common scenario in versions before 1.0.5, as
 the "clean up" that takes place when purging the node does not include
-purging out old entries from the Slony-I table, sl_confirm, for the
+purging out old entries from the <productname/Slony-I/ table, sl_confirm, for the
 recently departed node.
 
 <para> The node is no longer around to update confirmations of what
@@ -551,10 +559,10 @@
 ERROR  remoteWorkerThread_1: SYNC aborted
 </screen>
 
-<para>The transaction rolls back, and Slony-I tries again, and again,
+<para>The transaction rolls back, and <productname/Slony-I/ tries again, and again,
 and again.  The problem is with one of the <emphasis/last/ SQL statements, the
 one with <command/log_cmdtype = 'I'/.  That isn't quite obvious; what takes
-place is that Slony-I groups 10 update queries together to diminish
+place is that <productname/Slony-I/ groups 10 update queries together to diminish
 the number of network round trips.
 
 <answer><para>
@@ -571,7 +579,7 @@
 over, as well as when temporary network failure seemed likely.
 
 <listitem><para> The scenario seems to involve a delete transaction
-having been missed by Slony-I.
+having been missed by <productname/Slony-I/.
 
 </itemizedlist>
 
@@ -582,7 +590,7 @@
 <para>What is necessary, at this point, is to drop the replication set
 (or even the node), and restart replication from scratch on that node.
 
-<para>In Slony-I 1.0.5, the handling of purges of sl_log_1 are rather
+<para>In <productname/Slony-I/ 1.0.5, the handling of purges of sl_log_1 are rather
 more conservative, refusing to purge entries that haven't been
 successfully synced for at least 10 minutes on all nodes.  It is not
 certain that that will prevent the "glitch" from taking place, but it
Index: dropthings.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/dropthings.html -Ldoc/adminguide/dropthings.html -u -w -r1.2 -r1.3
--- doc/adminguide/dropthings.html
+++ doc/adminguide/dropthings.html
@@ -82,61 +82,104 @@
 >10. Dropping things from Slony Replication</A
 ></H1
 ><P
->There are several things you might want to do involving dropping things from Slony-I replication.&#13;</P
+>There are several things you might want to do involving dropping
+things from <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> replication.&#13;</P
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN817"
+NAME="AEN995"
 >10.1. Dropping A Whole Node</A
 ></H2
 ><P
->If you wish to drop an entire node from replication, the Slonik command DROP NODE should do the trick.  &#13;</P
-><P
->This will lead to Slony-I dropping the triggers (generally that deny the ability to update data), restoring the "native" triggers, dropping the schema used by Slony-I, and the slon process for that node terminating itself.&#13;</P
-><P
->As a result, the database should be available for whatever use your application makes of the database.&#13;</P
-><P
->This is a pretty major operation, with considerable potential to cause substantial destruction; make sure you drop the right node!&#13;</P
+>If you wish to drop an entire node from replication, the <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> command <TT
+CLASS="COMMAND"
+>DROP NODE</TT
+> should do
+the trick.&#13;</P
 ><P
->The operation will fail if there are any nodes subscribing to the node that you attempt to drop, so there is a bit of failsafe.&#13;</P
+>This will lead to <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> dropping the triggers
+(generally those that deny the ability to update data), restoring the
+"native" triggers, dropping the schema used by <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>,
+and the slon process for that node terminating itself.&#13;</P
+><P
+>As a result, the database should be available for whatever use
+your application makes of the database.&#13;</P
+><P
+>This is a pretty major operation, with considerable potential to
+cause substantial destruction; make sure you drop the right node!&#13;</P
+><P
+>The operation will fail if there are any nodes subscribing to
+the node that you attempt to drop, so there is a bit of a failsafe to
+protect you from errors.&#13;</P
 ><P
->SlonyFAQ17 documents some extra maintenance that may need to be done on sl_confirm if you are running versions prior to 1.0.5.&#13;</P
+><A
+HREF="faq.html#FAQ17"
+> sl_log_1 isn't getting purged </A
+>
+documents some extra maintenance that may need to be done on
+sl_confirm if you are running versions prior to 1.0.5.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN825"
+NAME="AEN1008"
 >10.2. Dropping An Entire Set</A
 ></H2
 ><P
 >If you wish to stop replicating a particular replication set,
-the Slonik command <TT
+the <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> command <TT
 CLASS="COMMAND"
 >DROP SET</TT
-> is what you need to use.&#13;</P
+>
+is what you need to use.&#13;</P
 ><P
 >Much as with <TT
 CLASS="COMMAND"
 >DROP NODE</TT
->, this leads to Slony-I dropping
-the Slony-I triggers on the tables and restoring <SPAN
+>, this leads to
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> dropping the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> triggers on
+the tables and restoring <SPAN
 CLASS="QUOTE"
 >"native"</SPAN
->
-triggers.  One difference is that this takes place on <SPAN
+> triggers.  One difference is
+that this takes place on <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >all</I
 ></SPAN
->
-nodes in the cluster, rather than on just one node.  Another
-difference is that this does not clear out the Slony-I cluster's
-namespace, as there might be other sets being serviced.&#13;</P
+> nodes in the cluster, rather
+than on just one node.  Another difference is that this does not clear
+out the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster's namespace, as there might be
+other sets being serviced.&#13;</P
 ><P
 >This operation is quite a bit more dangerous than <TT
 CLASS="COMMAND"
@@ -148,8 +191,11 @@
 CLASS="EMPHASIS"
 >isn't</I
 ></SPAN
-> the same sort of "failsafe."  If you
-tell <TT
+> the same sort of <SPAN
+CLASS="QUOTE"
+>"failsafe."</SPAN
+> If
+you tell <TT
 CLASS="COMMAND"
 >DROP SET</TT
 > to drop the <SPAN
@@ -158,15 +204,19 @@
 CLASS="EMPHASIS"
 >wrong</I
 ></SPAN
-> set, there isn't
-anything to prevent "unfortunate results."&#13;</P
+> set, there
+isn't anything to prevent potentially career-limiting
+<SPAN
+CLASS="QUOTE"
+>"unfortunate results."</SPAN
+> Handle with care...&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN838"
+NAME="AEN1027"
 >10.3. Unsubscribing One Node From One Set</A
 ></H2
 ><P
@@ -181,13 +231,17 @@
 CLASS="COMMAND"
 >DROP NODE</TT
 >; it
-involves dropping Slony-I triggers and restoring "native" triggers on
-one node, for one replication set.&#13;</P
+involves dropping <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> triggers and restoring
+"native" triggers on one node, for one replication set.&#13;</P
 ><P
 >Much like with <TT
 CLASS="COMMAND"
 >DROP NODE</TT
->, this operation will fail if there is a node subscribing to the set on this node. 
+>, this operation will fail if
+there is a node subscribing to the set on this node.
 
 <DIV
 CLASS="WARNING"
@@ -222,8 +276,11 @@
 ></SPAN
 > fresh set of
 the data on a provider.  The fact that the data was recently being
-replicated isn't good enough; Slony-I will expect to refresh the data
-from scratch.</P
+replicated isn't good enough; <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> will expect to
+refresh the data from scratch.</P
 ></TD
 ></TR
 ></TABLE
@@ -235,21 +292,26 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN850"
+NAME="AEN1041"
 >10.4. Dropping A Table From A Set</A
 ></H2
 ><P
->In Slony 1.0.5 and above, there is a Slonik command <TT
+>In <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> 1.0.5 and above, there is a Slonik
+command <TT
 CLASS="COMMAND"
->SET
-DROP TABLE</TT
-> that allows dropping a single table from replication
-without forcing the user to drop the entire replication set.&#13;</P
+>SET DROP TABLE</TT
+> that allows dropping a single table
+from replication without forcing the user to drop the entire
+replication set.&#13;</P
 ><P
 >If you are running an earlier version, there is a <SPAN
 CLASS="QUOTE"
 >"hack"</SPAN
-> to do this:&#13;</P
+>
+to do this:&#13;</P
 ><P
 >You can fiddle this by hand by finding the table ID for the
 table you want to get rid of, which you can find in sl_table, and then
@@ -271,23 +333,38 @@
 ></TABLE
 >&#13;</P
 ><P
->The schema will obviously depend on how you defined the Slony-I
-cluster.  The table ID, in this case, 40, will need to change to the
-ID of the table you want to have go away.&#13;</P
+>The schema will obviously depend on how you defined the
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster.  The table ID, in this case, 40, will
+need to change to the ID of the table you want removed.&#13;</P
 ><P
 >You'll have to run these three queries on all of the nodes,
-preferably firstly on the "master" node, so that the dropping of this
-propagates properly.  Implementing this via a Slonik statement with a
-new Slony event would do that.  Submitting the three queries using
-EXECUTE SCRIPT could do that.  Also possible would be to connect to
-each database and submit the queries by hand.&#13;</P
+preferably first on the origin node, so that the drop
+propagates properly.  Implementing this via a <A
+HREF="app-slonik.html#SLONIK"
+>slonik </A
+> statement with a new <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> event would
+do that.  Submitting the three queries using <TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+>
+accomplishes this; see <A
+HREF="ddlchanges.html"
+> Database Schema Changes</A
+> for more details.  Also possible would be to connect to each
+database and submit the queries by hand.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN860"
+NAME="AEN1057"
 >10.5. Dropping A Sequence From A Set</A
 ></H2
 ><P
@@ -303,9 +380,12 @@
 >If you are running an earlier version, here are instructions as
 to how to drop sequences:&#13;</P
 ><P
->The data that needs to be deleted to stop Slony from continuing
-to replicate the two sequences identified with Sequence IDs 93 and 59
-are thus:
+>The data that needs to be deleted to stop <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+from continuing to replicate the two sequences identified with
+Sequence IDs 93 and 59 is thus:
 
 <TABLE
 BORDER="0"
@@ -329,13 +409,12 @@
 > / <TT
 CLASS="COMMAND"
 >EXECUTE SCRIPT</TT
->, thus eliminating
-the sequence everywhere "at once."  Or they may be applied by hand to
-each of the nodes.
-
-
-
- </P
+>,
+thus eliminating the sequence everywhere <SPAN
+CLASS="QUOTE"
+>"at once."</SPAN
+> Or
+they may be applied by hand to each of the nodes.</P
 ></DIV
 ></DIV
 ><DIV
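To make the 1.0.5-and-above route concrete, here is a minimal slonik
sketch of dropping a single table from a set; the cluster name,
conninfo string, and the origin/table IDs are placeholders invented
for this illustration, not values taken from the guide:

    # Drop table ID 40 from replication without dropping the whole set.
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=localhost user=slony';

    set drop table (origin = 1, id = 40);

On versions prior to 1.0.5, the hand-run deletions against the
Slony-I configuration tables (sl_table and friends) described above
remain the only route.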
Index: ddlchanges.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/ddlchanges.sgml
+++ doc/adminguide/ddlchanges.sgml
@@ -5,57 +5,65 @@
 rather carefully, otherwise different nodes may get rather deranged
 because they disagree on how particular tables are built.
 
-<para>If you pass the changes through Slony-I via the <command/EXECUTE
-SCRIPT/ (slonik) / <function/ddlscript(set,script,node)/ (stored
-function), this allows you to be certain that the changes take effect
-at the same point in the transaction streams on all of the nodes.
-That may not be too important if you can take something of an outage
-to do schema changes, but if you want to do upgrades that take place
-while transactions are still firing their way through your systems,
-it's necessary.
+<para>If you pass the changes through <productname/Slony-I/ via the
+<command/EXECUTE SCRIPT/ (slonik) /
+<function/ddlscript(set,script,node)/ (stored function), you can be
+certain that the changes take effect at the same point in
+the transaction streams on all of the nodes.  That may not be too
+important if you can take something of an outage to do schema changes,
+but if you want to do upgrades that take place while transactions are
+still winding their way through your systems, this is necessary.
 
 <para>It's worth making a couple of comments on <quote/special things/
 about <command/EXECUTE SCRIPT/:
 
 <itemizedlist>
 
-<Listitem><Para> The script must not contain transaction
-<command/BEGIN/ or <command/END/ statements, as the script is already
-executed inside a transaction.  In PostgreSQL version 8, the
-introduction of nested transactions may change this requirement
-somewhat, but you must still remain aware that the actions in the
-script are wrapped inside a transaction.
-
-<Listitem><Para> If there is <emphasis/anything/ broken about the
-script, or about how it executes on a particular node, this will cause
-the slon daemon for that node to panic and crash. If you restart the
-node, it will, more likely than not, try to <emphasis/repeat/ the DDL
-script, which will, almost certainly, fail the second time just as it
-did the first time.  I have found this scenario to lead to a need to
-go to the <quote/master/ node to delete the event to stop it from
-continuing to fail.
-
-<Listitem><Para> For slon to, at that point, <quote/panic/ is probably
-the <emphasis/correct/ answer, as it allows the DBA to head over to
-the database node that is broken, and manually fix things before
-cleaning out the defective event and restarting slon.  You can be
-certain that the updates made <emphasis/after/ the DDL change on the
-provider node are queued up, waiting to head to the subscriber.  You
-don't run the risk of there being updates made that depended on the
-DDL changes in order to be correct.
+<listitem><para> The script must not contain transaction
+<command>BEGIN</command> or <command>END</command> statements, as the
+script is already executed inside a transaction.  In
+<productname>PostgreSQL</productname> version 8, the introduction of
+nested transactions may change this requirement somewhat, but you must
+still remain aware that the actions in the script are wrapped inside a
+transaction.</para></listitem>
+
+<listitem><para> If there is <emphasis>anything</emphasis> broken
+about the script, or about how it executes on a particular node, this
+will cause the <link linkend="slon"> <application>slon</application>
+</link> daemon for that node to panic and crash. If you restart the
+node, it will, more likely than not, try to
+<emphasis>repeat</emphasis> the DDL script, which will, almost
+certainly, fail the second time just as it did the first time.  In
+that scenario, I have found it necessary to go to the
+<quote>master</quote> node to delete the event to stop it from
+continuing to fail.</para></listitem>
+
+<listitem><para> For <application>slon</application> to, at that
+point, <quote>panic</quote> is probably the
+<emphasis>correct</emphasis> answer, as it allows the DBA to head over
+to the database node that is broken, and manually fix things before
+cleaning out the defective event and restarting
+<application>slon</application>.  You can be certain that the updates
+made <emphasis>after</emphasis> the DDL change on the provider node
+are queued up, waiting to head to the subscriber.  You don't run the
+risk of there being updates made that depended on the DDL changes in
+order to be correct.</para></listitem>
 
 </itemizedlist>
 
 <para>Unfortunately, this nonetheless implies that the use of the DDL
-facility is somewhat fragile and dangerous.  Making DDL changes should
-not be done in a sloppy or cavalier manner.  If your applications do
-not have fairly stable SQL schemas, then using Slony-I for replication
-is likely to be fraught with trouble and frustration.
+facility is somewhat fragile and fairly dangerous.  Making DDL changes
+must not be done in a sloppy or cavalier manner.  If your applications
+do not have fairly stable SQL schemas, then using
+<productname>Slony-I</productname> for replication is likely to be
+fraught with trouble and frustration.</para>
 
-<para>There is an article on how to manage Slony schema changes here:
+<para>There is an article on how to manage Slony-I schema changes
+here:
 <ulink url="http://www.varlena.com/varlena/GeneralBits/88.php">
-Varlena General Bits</ulink>
+Varlena General Bits</ulink></para>
 
+</sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
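As a concrete, hedged sketch of the EXECUTE SCRIPT usage this file
discusses -- the set ID, event node, and DDL file path are all
illustrative placeholders:

    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=localhost user=slony';

    # The script file must not contain BEGIN or END statements;
    # Slony-I already wraps its execution in a transaction.
    execute script (
        set id = 1,
        filename = '/tmp/ddl_change.sql',
        event node = 1
    );

If anything in the script fails on some node, expect the slon for
that node to panic, as the surrounding text warns.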
Index: subscribenodes.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/subscribenodes.html -Ldoc/adminguide/subscribenodes.html -u -w -r1.2 -r1.3
--- doc/adminguide/subscribenodes.html
+++ doc/adminguide/subscribenodes.html
@@ -85,18 +85,24 @@
 >Before you subscribe a node to a set, be sure that you have
 <B
 CLASS="APPLICATION"
->slon</B
->s running for both the master and the new
-subscribing node. If you don't have slons running, nothing will
-happen, and you'll beat your head against a wall trying to figure out
-what is going on.&#13;</P
-><P
->Subscribing a node to a set is done by issuing the slonik
-command <TT
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+>
+processes running for both the provider and the new subscribing node. If
+you don't have slons running, nothing will happen, and you'll beat
+your head against a wall trying to figure out what is going on.&#13;</P
+><P
+>Subscribing a node to a set is done by issuing the <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> command <TT
 CLASS="COMMAND"
 >subscribe set</TT
->. It may seem tempting to try to
-subscribe several nodes to a set within a single try block like this:
+>. It
+may seem tempting to try to subscribe several nodes to a set within a
+single try block like this:
 
 <TABLE
 BORDER="0"
@@ -121,20 +127,26 @@
 >
 &#13;</P
 ><P
-> You are just asking for trouble if you try to subscribe sets in
-that fashion. The proper procedure is to subscribe one node at a time,
-and to check the logs and databases before you move onto subscribing
-the next node to the set. It is also worth noting that the
+> But you are just asking for trouble if you try to subscribe
+sets in that fashion. The proper procedure is to subscribe one node at
+a time, and to check the logs and databases before you move onto
+subscribing the next node to the set. It is also worth noting that the
 <SPAN
 CLASS="QUOTE"
 >"success"</SPAN
-> within the above slonik try block does not imply that
-nodes 2, 3, and 4 have all been successfully subscribed. It merely
-indicates that the slonik commands were successfully received by the
+> within the above <A
+HREF="app-slonik.html#SLONIK"
+><B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+> try block does not imply that nodes 2, 3,
+and 4 have all been successfully subscribed. It merely indicates that
+the slonik commands were successfully received by the
 <B
 CLASS="APPLICATION"
 >slon</B
-> running on the master node.&#13;</P
+> running on the origin node.&#13;</P
 ><P
 >A typical sort of problem that will arise is that a cascaded
 subscriber is looking for a provider that is not ready yet.  In that
@@ -158,7 +170,10 @@
 node is stuck on the attempt to subscribe it.&#13;</P
 ><P
 >When you subscribe a node to a set, you should see something
-like this in your slony logs for the master node:
+like this in your <B
+CLASS="APPLICATION"
+>slon</B
+> logs for the provider node:
 
 <TABLE
 BORDER="0"
@@ -174,7 +189,11 @@
 ></TABLE
 >&#13;</P
 ><P
->You should also start seeing log entries like this in the slony logs for the subscribing node:
+>You should also start seeing log entries like this in the
+<B
+CLASS="APPLICATION"
+>slon</B
+> logs for the subscribing node:
 
 <TABLE
 BORDER="0"
@@ -191,14 +210,14 @@
 >&#13;</P
 ><P
 >It may take some time for larger tables to be copied from the
-master node to the new subscriber. If you check the pg_stat_activity
-table on the master node, you should see a query that is copying the
+provider node to the new subscriber. If you check the pg_stat_activity
+view on the provider node, you should see a query that is copying the
 table to stdout.&#13;</P
 ><P
 >The table <CODE
 CLASS="ENVAR"
 >sl_subscribe</CODE
-> on both the master, and the new
+> on both the provider and the new
 subscriber should contain entries for the new subscription:
 
 <TABLE
@@ -218,12 +237,8 @@
 >&#13;</P
 ><P
 >A final test is to insert a row into one of the replicated
-tables on the master node, and verify that the row is copied to the
-new subscriber.
-
-
-
- </P
+tables on the origin node, and verify that the row is copied to the
+new subscriber.</P
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
Index: slony.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/slony.html -Ldoc/adminguide/slony.html -u -w -r1.2 -r1.3
--- doc/adminguide/slony.html
+++ doc/adminguide/slony.html
@@ -171,22 +171,26 @@
 ><DL
 ><DT
 >2.1. <A
-HREF="requirements.html#AEN109"
+HREF="requirements.html#AEN117"
 >Software needed</A
 ></DT
 ><DT
 >2.2. <A
-HREF="requirements.html#AEN136"
->Getting Slony-I Source</A
+HREF="requirements.html#AEN146"
+>Getting <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+Source</A
 ></DT
 ><DT
 >2.3. <A
-HREF="requirements.html#AEN140"
+HREF="requirements.html#AEN152"
 >Time Synchronization</A
 ></DT
 ><DT
 >2.4. <A
-HREF="requirements.html#AEN148"
+HREF="requirements.html#AEN161"
 >Network Connectivity</A
 ></DT
 ></DL
@@ -200,27 +204,27 @@
 ><DL
 ><DT
 >3.1. <A
-HREF="installation.html#AEN175"
+HREF="installation.html#AEN196"
 >Short Version</A
 ></DT
 ><DT
 >3.2. <A
-HREF="installation.html#AEN179"
+HREF="installation.html#AEN200"
 >Configuration</A
 ></DT
 ><DT
 >3.3. <A
-HREF="installation.html#AEN182"
+HREF="installation.html#AEN209"
 >Example</A
 ></DT
 ><DT
 >3.4. <A
-HREF="installation.html#AEN187"
+HREF="installation.html#AEN214"
 >Build</A
 ></DT
 ><DT
 >3.5. <A
-HREF="installation.html#AEN194"
+HREF="installation.html#AEN223"
 >Installing Slony-I</A
 ></DT
 ></DL
@@ -234,23 +238,23 @@
 ><DL
 ><DT
 >4.1. <A
-HREF="concepts.html#AEN212"
+HREF="concepts.html#AEN241"
 >Cluster</A
 ></DT
 ><DT
 >4.2. <A
-HREF="concepts.html#AEN220"
+HREF="concepts.html#AEN249"
 >Node</A
 ></DT
 ><DT
 >4.3. <A
-HREF="concepts.html#AEN233"
+HREF="concepts.html#AEN262"
 >Replication Set</A
 ></DT
 ><DT
 >4.4. <A
-HREF="concepts.html#AEN238"
->Provider and Subscriber</A
+HREF="concepts.html#AEN267"
+>Origin, Providers and Subscribers</A
 ></DT
 ></DL
 ></DD
@@ -269,12 +273,12 @@
 ><DL
 ><DT
 >6.1. <A
-HREF="definingsets.html#AEN266"
+HREF="definingsets.html#AEN297"
 >Primary Keys</A
 ></DT
 ><DT
 >6.2. <A
-HREF="definingsets.html#AEN290"
+HREF="definingsets.html#AEN327"
 >Grouping tables into sets</A
 ></DT
 ></DL
@@ -290,7 +294,7 @@
 ><DL
 ><DT
 ><A
-HREF="slon.html"
+HREF="app-slon.html"
 ><B
 CLASS="APPLICATION"
 >slon</B
@@ -302,7 +306,7 @@
     </DT
 ><DT
 ><A
-HREF="slonik.html"
+HREF="app-slonik.html"
 ><B
 CLASS="APPLICATION"
 >slonik</B
@@ -330,117 +334,117 @@
 ><DL
 ><DT
 >1.1. <A
-HREF="slonyadmin.html#AEN454"
+HREF="slonyadmin.html#AEN508"
 >Node/Cluster Configuration - cluster.nodes</A
 ></DT
 ><DT
 >1.2. <A
-HREF="slonyadmin.html#AEN479"
+HREF="slonyadmin.html#AEN534"
 >Set configuration - cluster.set1, cluster.set2</A
 ></DT
 ><DT
 >1.3. <A
-HREF="slonyadmin.html#AEN499"
+HREF="slonyadmin.html#AEN560"
 >build_env.pl</A
 ></DT
 ><DT
 >1.4. <A
-HREF="slonyadmin.html#AEN512"
+HREF="slonyadmin.html#AEN573"
 >create_set.pl</A
 ></DT
 ><DT
 >1.5. <A
-HREF="slonyadmin.html#AEN517"
+HREF="slonyadmin.html#AEN578"
 >drop_node.pl</A
 ></DT
 ><DT
 >1.6. <A
-HREF="slonyadmin.html#AEN520"
+HREF="slonyadmin.html#AEN581"
 >drop_set.pl</A
 ></DT
 ><DT
 >1.7. <A
-HREF="slonyadmin.html#AEN524"
+HREF="slonyadmin.html#AEN585"
 >failover.pl</A
 ></DT
 ><DT
 >1.8. <A
-HREF="slonyadmin.html#AEN527"
+HREF="slonyadmin.html#AEN588"
 >init_cluster.pl</A
 ></DT
 ><DT
 >1.9. <A
-HREF="slonyadmin.html#AEN530"
+HREF="slonyadmin.html#AEN591"
 >merge_sets.pl</A
 ></DT
 ><DT
 >1.10. <A
-HREF="slonyadmin.html#AEN533"
+HREF="slonyadmin.html#AEN594"
 >move_set.pl</A
 ></DT
 ><DT
 >1.11. <A
-HREF="slonyadmin.html#AEN536"
+HREF="slonyadmin.html#AEN597"
 >replication_test.pl</A
 ></DT
 ><DT
 >1.12. <A
-HREF="slonyadmin.html#AEN539"
+HREF="slonyadmin.html#AEN600"
 >restart_node.pl</A
 ></DT
 ><DT
 >1.13. <A
-HREF="slonyadmin.html#AEN542"
+HREF="slonyadmin.html#AEN603"
 >restart_nodes.pl</A
 ></DT
 ><DT
 >1.14. <A
-HREF="slonyadmin.html#AEN545"
+HREF="slonyadmin.html#AEN606"
 >show_configuration.pl</A
 ></DT
 ><DT
 >1.15. <A
-HREF="slonyadmin.html#AEN549"
+HREF="slonyadmin.html#AEN610"
 >slon_kill.pl</A
 ></DT
 ><DT
 >1.16. <A
-HREF="slonyadmin.html#AEN552"
+HREF="slonyadmin.html#AEN613"
 >slon_pushsql.pl</A
 ></DT
 ><DT
 >1.17. <A
-HREF="slonyadmin.html#AEN555"
+HREF="slonyadmin.html#AEN616"
 >slon_start.pl</A
 ></DT
 ><DT
 >1.18. <A
-HREF="slonyadmin.html#AEN558"
+HREF="slonyadmin.html#AEN619"
 >slon_watchdog.pl</A
 ></DT
 ><DT
 >1.19. <A
-HREF="slonyadmin.html#AEN561"
+HREF="slonyadmin.html#AEN622"
 >slon_watchdog2.pl</A
 ></DT
 ><DT
 >1.20. <A
-HREF="slonyadmin.html#AEN565"
+HREF="slonyadmin.html#AEN626"
 >subscribe_set.pl</A
 ></DT
 ><DT
 >1.21. <A
-HREF="slonyadmin.html#AEN568"
+HREF="slonyadmin.html#AEN629"
 >uninstall_nodes.pl</A
 ></DT
 ><DT
 >1.22. <A
-HREF="slonyadmin.html#AEN571"
+HREF="slonyadmin.html#AEN632"
 >unsubscribe_set.pl</A
 ></DT
 ><DT
 >1.23. <A
-HREF="slonyadmin.html#AEN574"
+HREF="slonyadmin.html#AEN635"
 >update_nodes.pl</A
 ></DT
 ></DL
@@ -464,12 +468,12 @@
 ><DL
 ><DT
 >4.1. <A
-HREF="monitoring.html#AEN645"
+HREF="monitoring.html#AEN721"
 >CONFIG notices</A
 ></DT
 ><DT
 >4.2. <A
-HREF="monitoring.html#AEN650"
+HREF="monitoring.html#AEN726"
 >DEBUG Notices</A
 ></DT
 ></DL
@@ -483,17 +487,17 @@
 ><DL
 ><DT
 >5.1. <A
-HREF="maintenance.html#AEN665"
+HREF="maintenance.html#AEN748"
 >Watchdogs: Keeping Slons Running</A
 ></DT
 ><DT
 >5.2. <A
-HREF="maintenance.html#AEN669"
+HREF="maintenance.html#AEN755"
 >Alternative to Watchdog: generate_syncs.sh</A
 ></DT
 ><DT
 >5.3. <A
-HREF="maintenance.html#AEN678"
+HREF="maintenance.html#AEN771"
 >Log Files</A
 ></DT
 ></DL
@@ -512,23 +516,23 @@
 ><DL
 ><DT
 >7.1. <A
-HREF="failover.html#AEN700"
+HREF="failover.html#AEN810"
 >Foreword</A
 ></DT
 ><DT
 >7.2. <A
-HREF="failover.html#AEN705"
->Switchover</A
+HREF="failover.html#AEN822"
+>Controlled Switchover</A
 ></DT
 ><DT
 >7.3. <A
-HREF="failover.html#AEN722"
+HREF="failover.html#AEN845"
 >Failover</A
 ></DT
 ><DT
 >7.4. <A
-HREF="failover.html#AEN737"
->After failover, getting back node1</A
+HREF="failover.html#AEN876"
+>After Failover, Reconfiguring node1</A
 ></DT
 ></DL
 ></DD
@@ -541,23 +545,18 @@
 ><DL
 ><DT
 >8.1. <A
-HREF="listenpaths.html#AEN744"
+HREF="listenpaths.html#AEN898"
 >How Listening Can Break</A
 ></DT
 ><DT
 >8.2. <A
-HREF="listenpaths.html#AEN753"
+HREF="listenpaths.html#AEN915"
 >How The Listen Configuration Should Look</A
 ></DT
 ><DT
 >8.3. <A
-HREF="listenpaths.html#AEN783"
->Open Question</A
-></DT
-><DT
->8.4. <A
-HREF="listenpaths.html#AEN788"
->Generating listener entries via heuristics</A
+HREF="listenpaths.html#AEN958"
+>Automated Listen Path Generation</A
 ></DT
 ></DL
 ></DD
@@ -575,27 +574,27 @@
 ><DL
 ><DT
 >10.1. <A
-HREF="dropthings.html#AEN817"
+HREF="dropthings.html#AEN995"
 >Dropping A Whole Node</A
 ></DT
 ><DT
 >10.2. <A
-HREF="dropthings.html#AEN825"
+HREF="dropthings.html#AEN1008"
 >Dropping An Entire Set</A
 ></DT
 ><DT
 >10.3. <A
-HREF="dropthings.html#AEN838"
+HREF="dropthings.html#AEN1027"
 >Unsubscribing One Node From One Set</A
 ></DT
 ><DT
 >10.4. <A
-HREF="dropthings.html#AEN850"
+HREF="dropthings.html#AEN1041"
 >Dropping A Table From A Set</A
 ></DT
 ><DT
 >10.5. <A
-HREF="dropthings.html#AEN860"
+HREF="dropthings.html#AEN1057"
 >Dropping A Sequence From A Set</A
 ></DT
 ></DL
@@ -614,17 +613,17 @@
 ><DL
 ><DT
 >12.1. <A
-HREF="firstdb.html#AEN953"
+HREF="firstdb.html#AEN1161"
 >Creating the pgbenchuser</A
 ></DT
 ><DT
 >12.2. <A
-HREF="firstdb.html#AEN957"
+HREF="firstdb.html#AEN1165"
 >Preparing the databases</A
 ></DT
 ><DT
 >12.3. <A
-HREF="firstdb.html#AEN973"
+HREF="firstdb.html#AEN1185"
 >Configuring the Database for Replication.</A
 ></DT
 ></DL
@@ -638,7 +637,7 @@
 ><DL
 ><DT
 >13.1. <A
-HREF="help.html#AEN1017"
+HREF="help.html#AEN1239"
 >Other Information Sources</A
 ></DT
 ></DL
Index: ddlchanges.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/ddlchanges.html -Ldoc/adminguide/ddlchanges.html -u -w -r1.2 -r1.3
--- doc/adminguide/ddlchanges.html
+++ doc/adminguide/ddlchanges.html
@@ -93,20 +93,23 @@
 rather carefully, otherwise different nodes may get rather deranged
 because they disagree on how particular tables are built.&#13;</P
 ><P
->If you pass the changes through Slony-I via the <TT
+>If you pass the changes through <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> via the
+<TT
 CLASS="COMMAND"
->EXECUTE
-SCRIPT</TT
-> (slonik) / <CODE
+>EXECUTE SCRIPT</TT
+> (slonik) /
+<CODE
 CLASS="FUNCTION"
 >ddlscript(set,script,node)</CODE
-> (stored
-function), this allows you to be certain that the changes take effect
-at the same point in the transaction streams on all of the nodes.
-That may not be too important if you can take something of an outage
-to do schema changes, but if you want to do upgrades that take place
-while transactions are still firing their way through your systems,
-it's necessary.&#13;</P
+> (stored function), you can be
+certain that the changes take effect at the same point in
+the transaction streams on all of the nodes.  That may not be too
+important if you can take something of an outage to do schema changes,
+but if you want to do upgrades that take place while transactions are
+still winding their way through your systems, this is necessary.&#13;</P
 ><P
 >It's worth making a couple of comments on <SPAN
 CLASS="QUOTE"
@@ -129,11 +132,15 @@
 > or <TT
 CLASS="COMMAND"
 >END</TT
-> statements, as the script is already
-executed inside a transaction.  In PostgreSQL version 8, the
-introduction of nested transactions may change this requirement
-somewhat, but you must still remain aware that the actions in the
-script are wrapped inside a transaction.&#13;</P
+> statements, as the
+script is already executed inside a transaction.  In
+<SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> version 8, the introduction of
+nested transactions may change this requirement somewhat, but you must
+still remain aware that the actions in the script are wrapped inside a
+transaction.</P
 ></LI
 ><LI
 ><P
@@ -143,68 +150,85 @@
 CLASS="EMPHASIS"
 >anything</I
 ></SPAN
-> broken about the
-script, or about how it executes on a particular node, this will cause
-the slon daemon for that node to panic and crash. If you restart the
-node, it will, more likely than not, try to <SPAN
+> broken
+about the script, or about how it executes on a particular node, this
+will cause the <A
+HREF="app-slon.html#SLON"
+> <B
+CLASS="APPLICATION"
+>slon</B
+></A
+> daemon for that node to panic and crash. If you restart the
+node, it will, more likely than not, try to
+<SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >repeat</I
 ></SPAN
-> the DDL
-script, which will, almost certainly, fail the second time just as it
-did the first time.  I have found this scenario to lead to a need to
-go to the <SPAN
+> the DDL script, which will, almost
+certainly, fail the second time just as it did the first time.  In
+that scenario, I have found it necessary to go to the
+<SPAN
 CLASS="QUOTE"
 >"master"</SPAN
 > node to delete the event to stop it from
-continuing to fail.&#13;</P
+continuing to fail.</P
 ></LI
 ><LI
 ><P
-> For slon to, at that point, <SPAN
+> For <B
+CLASS="APPLICATION"
+>slon</B
+> to, at that
+point, <SPAN
 CLASS="QUOTE"
 >"panic"</SPAN
-> is probably
-the <SPAN
+> is probably the
+<SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >correct</I
 ></SPAN
-> answer, as it allows the DBA to head over to
-the database node that is broken, and manually fix things before
-cleaning out the defective event and restarting slon.  You can be
-certain that the updates made <SPAN
+> answer, as it allows the DBA to head over
+to the database node that is broken, and manually fix things before
+cleaning out the defective event and restarting
+<B
+CLASS="APPLICATION"
+>slon</B
+>.  You can be certain that the updates
+made <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >after</I
 ></SPAN
-> the DDL change on the
-provider node are queued up, waiting to head to the subscriber.  You
-don't run the risk of there being updates made that depended on the
-DDL changes in order to be correct.&#13;</P
+> the DDL change on the provider node
+are queued up, waiting to head to the subscriber.  You don't run the
+risk of there being updates made that depended on the DDL changes in
+order to be correct.</P
 ></LI
 ></UL
 >&#13;</P
 ><P
 >Unfortunately, this nonetheless implies that the use of the DDL
-facility is somewhat fragile and dangerous.  Making DDL changes should
-not be done in a sloppy or cavalier manner.  If your applications do
-not have fairly stable SQL schemas, then using Slony-I for replication
-is likely to be fraught with trouble and frustration.&#13;</P
+facility is somewhat fragile and fairly dangerous.  Making DDL changes
+must not be done in a sloppy or cavalier manner.  If your applications
+do not have fairly stable SQL schemas, then using
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> for replication is likely to be
+fraught with trouble and frustration.</P
 ><P
->There is an article on how to manage Slony schema changes here:
+>There is an article on how to manage Slony-I schema changes
+here:
 <A
 HREF="http://www.varlena.com/varlena/GeneralBits/88.php"
 TARGET="_top"
 >Varlena General Bits</A
->
-
-
- </P
+></P
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
Index: cluster.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/cluster.html -Ldoc/adminguide/cluster.html -u -w -r1.2 -r1.3
--- doc/adminguide/cluster.html
+++ doc/adminguide/cluster.html
@@ -83,16 +83,22 @@
 >5. Defining Slony-I Clusters</A
 ></H1
 ><P
->A Slony-I cluster is the basic grouping of database instances in
-which replication takes place.  It consists of a set of PostgreSQL
-database instances in which is defined a namespace specific to that
-cluster.&#13;</P
+>A <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster is the basic grouping of
+database instances in which replication takes place.  It consists of a
+set of <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> database instances in which a namespace
+specific to that cluster is defined.&#13;</P
 ><P
 >Each database instance in which replication is to take place is
 identified by a node number.&#13;</P
 ><P
->For a simple install, it may be reasonable for the "master" to
-be node #1, and for the "slave" to be node #2.&#13;</P
+>For a simple install, it may be reasonable for the origin to be
+node #1, and for the subscriber to be node #2.&#13;</P
 ><P
 >Some planning should be done, in more complex cases, to ensure
 that the numbering system is kept sane, lest the administrators be
Index: firstdb.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/firstdb.html -Ldoc/adminguide/firstdb.html -u -w -r1.2 -r1.3
--- doc/adminguide/firstdb.html
+++ doc/adminguide/firstdb.html
@@ -82,14 +82,21 @@
 >12. Replicating Your First Database</A
 ></H1
 ><P
->In this example, we will be replicating a brand new pgbench database.  The
-mechanics of replicating an existing database are covered here, however we
-recommend that you learn how Slony-I functions by using a fresh new
-non-production database.&#13;</P
-><P
->The Slony-I replication engine is trigger-based, allowing us to
-replicate databases (or portions thereof) running under the same
-postmaster.&#13;</P
+>In this example, we will be replicating a brand new pgbench
+database.  The mechanics of replicating an existing database are
+covered here; however, we recommend that you learn how
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> functions by using a fresh new
+non-production database.</P
+><P
+>The <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> replication engine is
+trigger-based, allowing us to replicate databases (or portions
+thereof) running under the same postmaster.</P
 ><P
 >This example will show how to replicate the pgbench database
 running on localhost (master) to the pgbench slave database also
@@ -104,27 +111,28 @@
 > You have <CODE
 CLASS="OPTION"
 >tcpip_socket=true</CODE
-> in your <TT
+> in your
+<TT
 CLASS="FILENAME"
 >postgresql.conf</TT
-> and
-	</P
+> and</P
 ></LI
 ><LI
 ><P
-> You have enabled access in your cluster(s) via <TT
+> You have enabled access in your cluster(s) via
+<TT
 CLASS="FILENAME"
 >pg_hba.conf</TT
 ></P
 ></LI
 ></UL
->&#13;</P
+></P
 ><P
 > The <CODE
 CLASS="ENVAR"
 >REPLICATIONUSER</CODE
-> needs to be a PostgreSQL superuser.  This is typically
-postgres or pgsql.&#13;</P
+> needs to be a PostgreSQL superuser.
+This is typically postgres or pgsql.</P
 ><P
 >You should also set the following shell variables:
 
@@ -259,7 +267,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN953"
+NAME="AEN1161"
 >12.1. Creating the pgbenchuser</A
 ></H2
 ><P
@@ -273,7 +281,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN957"
+NAME="AEN1165"
 >12.2. Preparing the databases</A
 ></H2
 ><TABLE
@@ -291,7 +299,10 @@
 ></TR
 ></TABLE
 ><P
->Because Slony-I depends on the databases having the pl/pgSQL procedural
+>Because <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> depends on the databases having the pl/pgSQL procedural
 language installed, we better install it now.  It is possible that you have
 installed pl/pgSQL into the template1 database in which case you can skip this
 step because it's already installed into the $MASTERDBNAME.
@@ -310,7 +321,10 @@
 ></TABLE
 >&#13;</P
 ><P
->Slony-I does not yet automatically copy table definitions from a
+><SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> does not yet automatically copy table definitions from a
 master when a slave subscribes to it, so we need to import this data.
 We do this with <B
 CLASS="APPLICATION"
@@ -331,18 +345,21 @@
 ></TABLE
 >&#13;</P
 ><P
->To illustrate how Slony-I allows for on the fly replication
-subscription, let's start up <B
+>To illustrate how <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> allows for on the fly
+replication subscription, let's start up <B
 CLASS="APPLICATION"
 >pgbench</B
->.  If you run the
-<B
+>.  If
+you run the <B
 CLASS="APPLICATION"
 >pgbench</B
-> application in the foreground of a separate
-terminal window, you can stop and restart it with different parameters
-at any time.  You'll need to re-export the variables again so they are
-available in this session as well.&#13;</P
+> application in the foreground of a
+separate terminal window, you can stop and restart it with different
+parameters at any time.  You'll need to re-export the variables again
+so they are available in this session as well.&#13;</P
 ><P
 >The typical command to run <B
 CLASS="APPLICATION"
@@ -367,24 +384,32 @@
 CLASS="APPLICATION"
 >pgbench</B
 > with 5 concurrent clients
-each processing 1000 transactions against the pgbench database running
-on localhost as the pgbench user.&#13;</P
+each processing 1000 transactions against the <B
+CLASS="APPLICATION"
+>pgbench</B
+>
+database running on localhost as the pgbench user.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN973"
+NAME="AEN1185"
 >12.3. Configuring the Database for Replication.</A
 ></H2
 ><P
 >Creating the configuration tables, stored procedures, triggers
-and configuration is all done through the slonik tool.  It is a
-specialized scripting aid that mostly calls stored procedures in the
-master/slave (node) databases.  The script to create the initial
-configuration for the simple master-slave setup of our pgbench
-database looks like this:
+and configuration is all done through the <A
+HREF="app-slonik.html#SLONIK"
+><B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+> tool.  It is a specialized scripting aid
+that mostly calls stored procedures in the master/slave (node)
+databases.  The script to create the initial configuration for the
+simple master-slave setup of our pgbench database looks like this:
 
 <TABLE
 BORDER="0"
@@ -457,11 +482,19 @@
 ></TABLE
 >&#13;</P
 ><P
->Is the pgbench still running?  If not start it again.&#13;</P
+>Is the <B
+CLASS="APPLICATION"
+>pgbench</B
+> still running?  If not, start it
+again.&#13;</P
 ><P
 >At this point we have 2 databases that are fully prepared.  One
-is the master database in which bgbench is busy accessing and changing
-rows.  It's now time to start the replication daemons.&#13;</P
+is the master database in which <B
+CLASS="APPLICATION"
+>pgbench</B
+> is busy
+accessing and changing rows.  It's now time to start the replication
+daemons.&#13;</P
 ><P
 >On $MASTERHOST the command to start the replication engine is
 
@@ -497,15 +530,22 @@
 ><P
 >Even though we have the <B
 CLASS="APPLICATION"
->slon</B
-> running on both the
-master and slave, and they are both spitting out diagnostics and other
-messages, we aren't replicating any data yet.  The notices you are
-seeing is the synchronization of cluster configurations between the 2
+><A
+HREF="app-slon.html#SLON"
+> slon</A
+></B
+> running on both the master and slave, and they
+are both spitting out diagnostics and other messages, we aren't
+replicating any data yet.  The notices you are seeing reflect the
+synchronization of cluster configurations between the 2
 <B
 CLASS="APPLICATION"
->slon</B
-> processes.&#13;</P
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+>
+processes.&#13;</P
 ><P
 >To start replicating the 4 pgbench tables (set 1) from the
 master (node id 1) to the slave (node id 2), execute the following
@@ -558,17 +598,20 @@
 performance of the two systems involved, the sizing of the two
 databases, the actual transaction load and how well the two databases
 are tuned and maintained, this catchup process can be a matter of
-minutes, hours, or eons.&#13;</P
+minutes, hours, or eons.</P
 ><P
 >You have now successfully set up your first basic master/slave
 replication system, and the 2 databases should, once the slave has
 caught up, contain identical data.  That's the theory, at least.  In
 practice, it's good to build confidence by verifying that the datasets
-are in fact the same.&#13;</P
+are in fact the same.</P
 ><P
->The following script will create ordered dumps of the 2 databases and compare
-them.  Make sure that pgbench has completed it's testing, and that your slon
-sessions have caught up.
+>The following script will create ordered dumps of the 2
+databases and compare them.  Make sure that <B
+CLASS="APPLICATION"
+>pgbench</B
+> has
+completed its testing, and that your slon sessions have caught up.
 
 <TABLE
 BORDER="0"
@@ -613,23 +656,28 @@
 ></TD
 ></TR
 ></TABLE
->&#13;</P
+></P
 ><P
 >Note that there is somewhat more sophisticated documentation of
-the process in the Slony-I source code tree in a file called
-slony-I-basic-mstr-slv.txt.&#13;</P
+the process in the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> source code tree
+in a file called
+<TT
+CLASS="FILENAME"
+>slony-I-basic-mstr-slv.txt</TT
+>.</P
 ><P
->If this script returns "FAILED" please contact the developers at
-<A
-HREF="http://slony.org/"
+>If this script returns <TT
+CLASS="COMMAND"
+>FAILED</TT
+>, please contact the
+developers at <A
+HREF="http://slony.info/"
 TARGET="_top"
-> http://slony.org/</A
->
-
-
-
-
- </P
+>http://slony.info/</A
+></P
 ></DIV
 ></DIV
 ><DIV
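For readers skimming this diff, an abbreviated sketch of the kind of
slonik configuration script the section describes (the full script is
in the guide itself); the cluster name and conninfo strings are
placeholders, and only one of the four pgbench tables is shown -- the
others are added with further set add table statements:

    cluster name = slony_example;
    node 1 admin conninfo = 'dbname=pgbench host=localhost user=postgres';
    node 2 admin conninfo = 'dbname=pgbenchslave host=localhost user=postgres';

    # Initialize the master node and describe the set to replicate.
    init cluster (id = 1, comment = 'Master node');
    create set (id = 1, origin = 1, comment = 'pgbench tables');
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.accounts');

    # Register the slave and the communication paths in each direction.
    store node (id = 2, comment = 'Slave node');
    store path (server = 1, client = 2,
                conninfo = 'dbname=pgbench host=localhost user=postgres');
    store path (server = 2, client = 1,
                conninfo = 'dbname=pgbenchslave host=localhost user=postgres');
    store listen (origin = 1, provider = 1, receiver = 2);
    store listen (origin = 2, provider = 2, receiver = 1);

The subscription itself is issued in a separate slonik run, once a
slon is running for each node, along the lines of subscribe set
(id = 1, provider = 1, receiver = 2, forward = no);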
Index: listenpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/listenpaths.sgml -Ldoc/adminguide/listenpaths.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/listenpaths.sgml
+++ doc/adminguide/listenpaths.sgml
@@ -1,58 +1,66 @@
 <sect1 id="listenpaths"> <title/ Slony Listen Paths/
 
+<note> <para> If you are running <productname>Slony-I</productname> version 1.1, it
+should be <emphasis>completely unnecessary</emphasis> to read this section, as that
+version introduces a way to manage this part of the configuration
+automatically.  For earlier versions, however, it is needful...</para>
+</note>
+
 <para>If you have more than two or three nodes, and any degree of
-usage of cascaded subscribers (_e.g._ - subscribers that are
+usage of cascaded subscribers (<emphasis/e.g./ - subscribers that are
 subscribing through a subscriber node), you will have to be fairly
-careful about the configuration of "listen paths" via the Slonik STORE
-LISTEN and DROP LISTEN statements that control the contents of the
+careful about the configuration of <quote/listen paths/ via the Slonik <command/STORE
+LISTEN/ and <command/DROP LISTEN/ statements that control the contents of the
 table sl_listen.
 
-<para>The "listener" entries in this table control where each node
-expects to listen in order to get events propagated from other nodes.
-You might think that nodes only need to listen to the "parent" from
-whom they are getting updates, but in reality, they need to be able to
-receive messages from _all_ nodes in order to be able to conclude that
-SYNCs have been received everywhere, and that, therefore, entries in
-sl_log_1 and sl_log_2 have been applied everywhere, and can therefore
-be purged.
+<para>The <quote/listener/ entries in this table control where each
+node expects to listen in order to get events propagated from other
+nodes.  You might think that nodes only need to listen to the
+<quote/parent/ from whom they are getting updates, but in reality,
+they need to be able to receive messages from <emphasis/all/ nodes in
+order to be able to conclude that SYNCs have been received everywhere,
+and that, therefore, entries in sl_log_1 and sl_log_2 have been
+applied everywhere, and can therefore be purged.  This extra
+communication is needful so <productname/Slony-I/ is able to shift
+origins to other locations.
 
 <sect2><title/ How Listening Can Break/
 
 <para>On one occasion, I had a need to drop a subscriber node (#2) and
 recreate it.  That node was the data provider for another subscriber
-(#3) that was, in effect, a "cascaded slave."  Dropping the subscriber
-node initially didn't work, as slonik informed me that there was a
-dependant node.  I repointed the dependant node to the "master" node
-for the subscription set, which, for a while, replicated without
-difficulties.
-
-<para>I then dropped the subscription on "node 2," and started
-resubscribing it.  That raised the Slony-I <command/SET_SUBSCRIPTION/
-event, which started copying tables.  At that point in time, events
-stopped propagating to "node 3," and while it was in perfectly OK
-shape, no events were making it to it.
+(#3) that was, in effect, a <quote/cascaded slave./ Dropping the
+subscriber node initially didn't work, as <link linkend="slonik">
+<command/slonik/ </link> informed me that there was a dependent node.
+I repointed the dependent node to the <quote/master/ node for the
+subscription set, which, for a while, replicated without difficulties.
+
+<para>I then dropped the subscription on <quote/node 2,/ and started
+resubscribing it.  That raised the <productname/Slony-I/
+<command/SET_SUBSCRIPTION/ event, which started copying tables.  At
+that point in time, events stopped propagating to <quote/node 3,/ and
+while it was in perfectly OK shape, no events were making it to it.
 
 <para>The problem was that node #3 was expecting to receive events
-from node #2, which was busy processing the <command/SET_SUBSCRIPTION/ event,
-and was not passing anything else on.
+from node #2, which was busy processing the <command/SET_SUBSCRIPTION/
+event, and was not passing anything else on.
 
 <para>We dropped the listener rules that caused node #3 to listen to
 node 2, replacing them with rules where it expected its events to come
-from node #1 (the "master" provider node for the replication set).  At
-that moment, "as if by magic," node #3 started replicating again, as
+from node #1 (the origin node for the replication set).  At that
+moment, <quote/as if by magic,/ node #3 started replicating again, as
 it discovered a place to get <command/SYNC/ events.
 
 <sect2><title/How The Listen Configuration Should Look/
 
-<para>The simple cases tend to be simple to cope with.  We'll look at
-a fairly complex set of nodes.
+<para>The simple cases tend to be simple to cope with.  We instead
+need to look at a more complex node configuration.
 
-<para>Consider a set of nodes, 1 thru 6, where 1 is the "master,"
-where 2-4 subscribe directly to the master, and where 5 subscribes to
+<para>Consider a set of nodes, 1 through 6, where 1 is the origin,
+where 2-4 subscribe directly to the origin, and where 5 subscribes to
 2, and 6 subscribes to 5.
 
-<para>Here is a "listener network" that indicates where each node
-should listen for messages coming from each other node:
+<para>Here is a <quote/listener network/ that indicates where each
+node should listen for messages coming from each other node:
 
 <screen>
        1|   2|   3|   4|   5|   6|
@@ -110,86 +118,74 @@
 
 <para>How we read these listen statements is thus...
 
-<para>When on the "receiver" node, look to the "provider" node to
-provide events coming from the "origin" node.
+<para>When on the <quote/receiver/ node, look to the <quote/provider/
+node to provide events coming from the <quote/origin/ node.
 
 <para>The tool <filename/init_cluster.pl/ in the <filename/altperl/
 scripts produces optimized listener networks in both the tabular form
-shown above as well as in the form of Slonik statements.
+shown above as well as in the form of <link linkend="slonik">
+<application/slonik/ </link> statements.
+
+<para>There are four <quote/thorns/ in this set of roses:
 
-<para>There are three "thorns" in this set of roses:
 <itemizedlist>
 
 <listitem><para> If you change the shape of the node set, so that the
 nodes subscribe differently to things, you need to drop sl_listen
 entries and create new ones to indicate the new preferred paths
-between nodes.  There is no automated way at this point to do this
-"reshaping."
+between nodes.  Until <productname/Slony-I/ version 1.1, there is no
+automated way to do this <quote/reshaping./
 
 <listitem><para> If you <emphasis/don't/ change the sl_listen entries,
 events will likely continue to propagate so long as all of the nodes
 continue to run well.  The problem will only be noticed when a node is
-taken down, "orphaning" any nodes that are listening through it.
+taken down, <quote/orphaning/ any nodes that are listening through it.
 
 <listitem><para> You might have multiple replication sets that have
-<emphasis/different/ shapes for their respective trees of subscribers.  There
-won't be a single "best" listener configuration in that case.
+<emphasis/different/ shapes for their respective trees of subscribers.
+There won't be a single <quote/best/ listener configuration in that
+case.
 
 <listitem><para> In order for there to be an sl_listen path, there
 <emphasis/must/ be a series of sl_path entries connecting the origin
 to the receiver.  This means that if the contents of sl_path do not
-express a "connected" network of nodes, then some nodes will not be
-reachable.  This would typically happen, in practice, when you have
+express a <quote/connected/ network of nodes, then some nodes will not
+be reachable.  This would typically happen, in practice, when you have
 two sets of nodes, one in one subnet, and another in another subnet,
-where there are only a couple of "firewall" nodes that can talk
+where there are only a couple of <quote/firewall/ nodes that can talk
 between the subnets.  Cut out those nodes and the subnets stop
 communicating.
 
 </itemizedlist>
 
-<sect2><title/Open Question/
-
-<para>I am not certain what happens if you have multiple listen path
-entries for one path, that is, if you set up entries allowing a node
-to listen to multiple receivers to get events from a particular
-origin.  Further commentary on that would be appreciated!
-
-<note><para> Actually, I do have answers to this; the remainder of
-this document should be re-presented based on the fact that Slony-I
-1.1 will include a "heuristic" to generate the listener paths
-automatically. </note>
-
-<sect2><title/ Generating listener entries via heuristics/
-
-<para>It ought to be possible to generate sl_listen entries
-dynamically, based on the following heuristics.  Hopefully this will
-take place in version 1.1, eliminating the need to configure this by
-hand.
+<sect2><title/Automated Listen Path Generation/
 
-<para>Configuration will (tentatively) be controlled based on two data
-sources:
+<para> In <productname/Slony-I/ version 1.1, a heuristic scheme is
+introduced to automatically generate listener entries.  This happens,
+in order, based on three data sources:
 
 <itemizedlist>
 
 <listitem><para> sl_subscribe entries are the first, most vital
-control as to what listens to what; we know there must be a "listen"
-entry for a subscriber node to listen to its provider for events from
-the provider, and there should be direct "listening" taking place
-between subscriber and provider.
+control as to what listens to what; we <emphasis/know/ there must be a
+direct path between each subscriber node and its provider.
 
 <listitem><para> sl_path entries are the second indicator; if
-sl_subscribe has not already indicated "how to listen," then a node
-may listen directly to the event's origin if there is a suitable
-sl_path entry
-
-<listitem><para> If there is no guidance thus far based on the above
-data sources, then nodes can listen indirectly if there is an sl_path
-entry that points to a suitable sl_listen entry...
+sl_subscribe has not already indicated <quote/how to listen,/ then a
+node may listen directly to the event's origin if there is a suitable
+sl_path entry.
+
+<listitem><para> Lastly, if there has been no guidance thus far based
+on the above data sources, then nodes can listen indirectly via every
+node that is either a provider for the receiver or is using the
+receiver as a provider.
 
 </itemizedlist>
 
-<para> A stored procedure would run on each node, rewriting sl_listen
-each time sl_subscribe or sl_path are modified.
+<para> Any time sl_subscribe or sl_path are modified,
+<function>RebuildListenEntries()</function> will be called to revise
+the listener paths.</para>
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
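To make the repair in the anecdote concrete, a sketch of the slonik
statements that re-point node 3 at the origin (the node numbers follow
the story above; the cluster name and conninfo strings are
placeholders):

    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
    node 3 admin conninfo = 'dbname=mydb host=node3 user=slony';

    # Stop node 3 from expecting node 1's events via node 2...
    drop listen (origin = 1, provider = 2, receiver = 3);
    # ...and have it listen to node 1 directly instead.
    store listen (origin = 1, provider = 1, receiver = 3);

Under Slony-I 1.1 and later, RebuildListenEntries() makes this sort of
hand-tuning unnecessary, as the file notes.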
--- /dev/null
+++ doc/adminguide/slony-commands.html
@@ -0,0 +1,184 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Slony-I Commands</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="PREVIOUS"
+TITLE="Defining Slony-I Replication
+Sets"
+HREF="definingsets.html"><LINK
+REL="NEXT"
+TITLE="slon"
+HREF="app-slon.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="REFERENCE"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="definingsets.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="app-slon.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="REFERENCE"
+><A
+NAME="SLONY-COMMANDS"
+></A
+><DIV
+CLASS="TITLEPAGE"
+><H1
+CLASS="TITLE"
+>I. Slony-I Commands</H1
+><DIV
+CLASS="TOC"
+><DL
+><DT
+><B
+>Table of Contents</B
+></DT
+><DT
+><A
+HREF="app-slon.html"
+><B
+CLASS="APPLICATION"
+>slon</B
+></A
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> daemon
+    </DT
+><DT
+><A
+HREF="app-slonik.html"
+><B
+CLASS="APPLICATION"
+>slonik</B
+></A
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> command processor
+    </DT
+></DL
+></DIV
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="definingsets.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="app-slon.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Defining Slony-I Replication
+Sets</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+>&nbsp;</TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><B
+CLASS="APPLICATION"
+>slon</B
+></TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
Index: slon.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slon.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slon.sgml -Ldoc/adminguide/slon.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/slon.sgml
+++ doc/adminguide/slon.sgml
@@ -1,4 +1,4 @@
-<refentry id="slon">
+<refentry id="app-slon">
 <refmeta>
     <refentrytitle id="app-slon-title"><application>slon</application></refentrytitle>
     <manvolnum>1</manvolnum>
@@ -6,13 +6,13 @@
   </refmeta>
 
   <refnamediv>
-    <refname><application>slon</application></refname>
+    <refname><application id="slon">slon</application></refname>
     <refpurpose>
       <productname>Slony-I</productname> daemon
     </refpurpose>
   </refnamediv>
 
- <indexterm zone="slon">
+ <indexterm zone="app-slon">
   <primary>slon</primary>
  </indexterm>
 
Index: maintenance.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/maintenance.html -Ldoc/adminguide/maintenance.html -u -w -r1.2 -r1.3
--- doc/adminguide/maintenance.html
+++ doc/adminguide/maintenance.html
@@ -82,37 +82,60 @@
 >5. Slony-I Maintenance</A
 ></H1
 ><P
->Slony-I actually does most of its necessary maintenance itself, in a "cleanup" thread:
+><SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> actually does most of its necessary
+maintenance itself, in a <SPAN
+CLASS="QUOTE"
+>"cleanup"</SPAN
+> thread:
 
 <P
 ></P
 ><UL
 ><LI
 ><P
-> Deletes old data from various tables in the Slony-I
-cluster's namespace, notably entries in sl_log_1, sl_log_2 (not yet
-used), and sl_seqlog.&#13;</P
+> Deletes old data from various tables in the
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster's namespace, notably entries in
+sl_log_1, sl_log_2 (not yet used), and sl_seqlog.&#13;</P
 ></LI
 ><LI
 ><P
-> Vacuum certain tables used by Slony-I.  As of 1.0.5,
-this includes pg_listener; in earlier versions, you must vacuum that
-table heavily, otherwise you'll find replication slowing down because
-Slony-I raises plenty of events, which leads to that table having
-plenty of dead tuples.&#13;</P
+> Vacuums certain tables used by <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>.
+As of 1.0.5, this includes pg_listener; in earlier versions, you must
+vacuum that table heavily, otherwise you'll find replication slowing
+down because <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> raises plenty of events, which
+leads to that table having plenty of dead tuples.&#13;</P
 ><P
 > In some versions (1.1, for sure; possibly 1.0.5) there is the
 option of not bothering to vacuum any of these tables if you are using
-something like pg_autovacuum to handle vacuuming of these tables.
-Unfortunately, it has been quite possible for pg_autovacuum to not
-vacuum quite frequently enough, so you probably want to use the
-internal vacuums.  Vacuuming pg_listener "too often" isn't nearly as
-hazardous as not vacuuming it frequently enough.&#13;</P
+something like <B
+CLASS="APPLICATION"
+>pg_autovacuum</B
+> to handle vacuuming of
+these tables.  Unfortunately, it has been quite possible for
+<B
+CLASS="APPLICATION"
+>pg_autovacuum</B
+> to not vacuum quite frequently enough, so
+you probably want to use the internal vacuums.  Vacuuming pg_listener
+"too often" isn't nearly as hazardous as not vacuuming it frequently
+enough.&#13;</P
 ><P
 >Unfortunately, if you have long-running transactions, vacuums
 cannot clear out dead tuples that are newer than the eldest
 transaction that is still running.  This will most notably lead to
-pg_listener growing large and will slow replication.&#13;</P
+pg_listener growing large and will slow replication.</P
 ></LI
 ></UL
 >&#13;</P
@@ -121,14 +144,23 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN665"
+NAME="AEN748"
 >5.1. Watchdogs: Keeping Slons Running</A
 ></H2
 ><P
->There are a couple of "watchdog" scripts available that monitor
-things, and restart the slon processes should they happen to die for
-some reason, such as a network "glitch" that causes loss of
-connectivity.&#13;</P
+>There are a couple of <SPAN
+CLASS="QUOTE"
+>"watchdog"</SPAN
+> scripts available that
+monitor things, and restart the <B
+CLASS="APPLICATION"
+>slon</B
+> processes should
+they happen to die for some reason, such as a network <SPAN
+CLASS="QUOTE"
+>"glitch"</SPAN
+>
+that causes loss of connectivity.&#13;</P
 ><P
 >You might want to run them...&#13;</P
 ></DIV
@@ -137,27 +169,50 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN669"
+NAME="AEN755"
 >5.2. Alternative to Watchdog: generate_syncs.sh</A
 ></H2
 ><P
->A new script for Slony-I 1.1 is "generate_syncs.sh", which
-addresses the following kind of situation.&#13;</P
+>A new script for <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> 1.1 is
+<B
+CLASS="APPLICATION"
+>generate_syncs.sh</B
+>, which addresses the following kind of
+situation.&#13;</P
+><P
+>Supposing you have some possibly-flakey server where the <B
+CLASS="APPLICATION"
+>slon</B
+>
+daemon might not run all the time, you might return from a
+weekend away only to discover the following situation...&#13;</P
 ><P
->Supposing you have some possibly-flakey slon daemon that might
-not run all the time, you might return from a weekend away only to
-discover the following situation...&#13;</P
-><P
->On Friday night, something went "bump" and while the database
-came back up, none of the slon daemons survived.  Your online
-application then saw nearly three days worth of heavy transactions.&#13;</P
+>On Friday night, something went <SPAN
+CLASS="QUOTE"
+>"bump"</SPAN
+> and while the
+database came back up, none of the <B
+CLASS="APPLICATION"
+>slon</B
+> daemons
+survived.  Your online application then saw nearly three days worth of
+reasonably heavy transaction load.&#13;</P
 ><P
 >When you restart slon on Monday, it hasn't done a SYNC on the
-master since Friday, so that the next "SYNC set" comprises all of the
-updates between Friday and Monday.  Yuck.&#13;</P
-><P
->If you run generate_syncs.sh as a cron job every 20 minutes, it
-will force in a periodic SYNC on the "master" server, which means that
+master since Friday, so that the next <SPAN
+CLASS="QUOTE"
+>"SYNC set"</SPAN
+> comprises all
+of the updates between Friday and Monday.  Yuck.&#13;</P
+><P
+>If you run <B
+CLASS="APPLICATION"
+>generate_syncs.sh</B
+> as a cron job every 20 minutes, it
+will force in a periodic SYNC on the origin, which means that
 between Friday and Monday, the numerous updates are split into more
 than 100 syncs, which can be applied incrementally, making the cleanup
 a lot less unpleasant.&#13;</P
@@ -168,40 +223,49 @@
 CLASS="EMPHASIS"
 >are</I
 ></SPAN
-> running regularly, this script
-won't bother doing anything.&#13;</P
+> running regularly,
+this script won't bother doing anything.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN678"
+NAME="AEN771"
 >5.3. Log Files</A
 ></H2
 ><P
->Slon daemons generate some more-or-less verbose log files,
-depending on what debugging level is turned on.  You might assortedly
-wish to:
+><A
+HREF="app-slon.html#SLON"
+> <B
+CLASS="APPLICATION"
+>slon</B
+> </A
+> daemons
+generate some more-or-less verbose log files, depending on what
+debugging level is turned on.  You might assortedly wish to:
 
 <P
 ></P
 ><UL
 ><LI
 ><P
-> Use a log rotator like Apache rotatelogs to have a
-sequence of log files so that no one of them gets too big;&#13;</P
+> Use a log rotator like <SPAN
+CLASS="PRODUCTNAME"
+>Apache</SPAN
+>
+<B
+CLASS="APPLICATION"
+>rotatelogs</B
+> to have a sequence of log files so that no
+one of them gets too big;&#13;</P
 ></LI
 ><LI
 ><P
-> Purge out old log files, periodically.&#13;</P
+> Purge out old log files, periodically.</P
 ></LI
 ></UL
->
-
-
-
- </P
+></P
 ></DIV
 ></DIV
 ><DIV
Index: cluster.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/cluster.sgml -Ldoc/adminguide/cluster.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/cluster.sgml
+++ doc/adminguide/cluster.sgml
@@ -1,15 +1,15 @@
 <sect1 id="cluster"> <title>Defining Slony-I Clusters</title>
 
-<para>A Slony-I cluster is the basic grouping of database instances in
-which replication takes place.  It consists of a set of PostgreSQL
-database instances in which is defined a namespace specific to that
-cluster.
+<para>A <productname/Slony-I/ cluster is the basic grouping of
+database instances in which replication takes place.  It consists of a
+set of <productname/PostgreSQL/ database instances in which is defined
+a namespace specific to that cluster.
 
 <para>Each database instance in which replication is to take place is
 identified by a node number.
 
-<para>For a simple install, it may be reasonable for the "master" to
-be node #1, and for the "slave" to be node #2.
+<para>For a simple install, it may be reasonable for the origin to be
+node #1, and for the subscriber to be node #2.
 
 <para>Some planning should be done, in more complex cases, to ensure
 that the numbering system is kept sane, lest the administrators be
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.4
retrieving revision 1.5
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.4 -r1.5
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -1,36 +1,37 @@
-<sect1 id="help"> <title/ More Slony-I Help /
-<para>If you are having problems with Slony-I, you have several options for help:
+<sect1 id="help"> <title> More Slony-I Help </title>
+
+<para>If you are having problems with Slony-I, you have several
+options for help:
 
 <itemizedlist>
 
-<listitem><Para> <ulink
+<listitem><para> <ulink
 url="http://slony.info/">http://slony.info/</ulink> - the official
-"home" of Slony
+"home" of Slony</para></listitem>
 
-<listitem><Para> Documentation on the Slony-I Site- Check the
+<listitem><para> Documentation on the Slony-I Site - Check the
 documentation on the Slony website: <ulink
 url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto
-</ulink>
+</ulink></para></listitem>
 
-<listitem><Para> Other Documentation - There are several articles here
-<ulink url=
-"http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication">
-Varlena GeneralBits </ulink> that may be helpful.
+<listitem><para> Other Documentation - There are several articles at
+<ulink url="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication">
+Varlena GeneralBits </ulink> that may be helpful.</para></listitem>
 
-<listitem><Para> IRC - There are usually some people on #slony on
+<listitem><para> IRC - There are usually some people on #slony on
 irc.freenode.net who may be able to answer some of your
 questions. There is also a bot named "rtfm_please" that you may want
-to chat with.
+to chat with.</para></listitem>
 
-<listitem><Para> Mailing lists - The answer to your problem may exist
+<listitem><para> Mailing lists - The answer to your problem may exist
 in the Slony1-general mailing list archives, or you may choose to ask
 your question on the Slony1-general mailing list. The mailing list
 archives, and instructions for joining the list may be found <ulink
-url="http://gborg.postgresql.org/mailman/listinfo/slony1">here. </ulink>
+url="http://gborg.postgresql.org/mailman/listinfo/slony1">here. </ulink></para></listitem>
 
-<listitem><Para> If your Russian is much better than your English,
+<listitem><para> If your Russian is much better than your English,
 then <ulink url="http://kirov.lug.ru/wiki/Slony">
-KirovOpenSourceCommunity: Slony</ulink> may be the place to go
+KirovOpenSourceCommunity: Slony</ulink> may be the place to go.</para></listitem>
 </itemizedlist>
 
 <sect2><title> Other Information Sources</title>
@@ -45,6 +46,7 @@
 </itemizedlist>
 
 </sect2>
+</sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
Index: monitoring.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/monitoring.html -Ldoc/adminguide/monitoring.html -u -w -r1.2 -r1.3
--- doc/adminguide/monitoring.html
+++ doc/adminguide/monitoring.html
@@ -82,19 +82,25 @@
 >4. Monitoring</A
 ></H1
 ><P
->Here are some of things that you may find in your Slony logs, and explanations of what they mean. &#13;</P
+>Here are some of the things that you may find in your
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> logs, and explanations of what they mean.&#13;</P
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN645"
+NAME="AEN721"
 >4.1. CONFIG notices</A
 ></H2
 ><P
->These entries are pretty straightforward. They are informative messages about your configuration. &#13;</P
+>These entries are pretty straightforward. They are informative
+messages about your configuration.&#13;</P
 ><P
->Here are some typical entries that you will probably run into in your logs:
+>Here are some typical entries that you will probably run into in
+your logs:
 
 <TABLE
 BORDER="0"
@@ -121,11 +127,13 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN650"
+NAME="AEN726"
 >4.2. DEBUG Notices</A
 ></H2
 ><P
->Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads:
+>Debug notices are always prefaced by the name of the thread that
+the notice originates from. You will see messages from the following
+threads:
 
 <TABLE
 BORDER="0"
@@ -148,11 +156,7 @@
 > WriteMe: I can't decide the format for the rest of this. I
 think maybe there should be a "how it works" page, explaining more
 about how the threads work, what to expect in the logs after you run a
-slonik command...
-
-
-
- </P
+slonik command...&#13;</P
 ></DIV
 ></DIV
 ><DIV
Index: faq.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/faq.html -Ldoc/adminguide/faq.html -u -w -r1.2 -r1.3
--- doc/adminguide/faq.html
+++ doc/adminguide/faq.html
@@ -71,7 +71,7 @@
 ><H1
 CLASS="TITLE"
 ><A
-NAME="AEN1025"
+NAME="AEN1247"
 >Slony-I FAQ</A
 ></H1
 ><H3
@@ -80,7 +80,7 @@
 ><H3
 CLASS="AUTHOR"
 ><A
-NAME="AEN1028"
+NAME="AEN1250"
 >Christopher  Browne</A
 ></H3
 ><HR></DIV
@@ -102,7 +102,7 @@
 ><DL
 ><DT
 >Q: <A
-HREF="faq.html#AEN1036"
+HREF="faq.html#AEN1258"
 >I looked for the <CODE
 CLASS="ENVAR"
 >_clustername</CODE
@@ -111,7 +111,7 @@
 ></DT
 ><DT
 >Q: <A
-HREF="faq.html#AEN1047"
+HREF="faq.html#AEN1270"
 >Some events moving around, but no replication&#13;</A
 ></DT
 ></DL
@@ -122,7 +122,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1036"
+NAME="AEN1258"
 ></A
 ><B
 >Q: I looked for the <CODE
@@ -138,23 +138,26 @@
 ><P
 ><B
 >A: </B
-> If the DSNs are wrong, then slon instances can't connect to the nodes.&#13;</P
+> If the DSNs are wrong, then slon instances can't
+connect to the nodes.&#13;</P
 ><P
 >This will generally lead to nodes remaining entirely untouched.&#13;</P
 ><P
->Recheck the connection configuration.  By the way, since
-<B
+>Recheck the connection configuration.  By the way, since <A
+HREF="app-slon.html#SLON"
+> <B
 CLASS="APPLICATION"
 >slon</B
-> links to libpq, you could have password information
-stored in <TT
+> </A
+> links to libpq, you could
+have password information stored in <TT
 CLASS="FILENAME"
 > <CODE
 CLASS="ENVAR"
 >$HOME</CODE
 >/.pgpass</TT
->,
-partially filling in right/wrong authentication information there.</P
+>, which may be silently filling in
+authentication information there, whether right or wrong.</P
 ></DIV
 ></DIV
 ><DIV
@@ -164,7 +167,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1047"
+NAME="AEN1270"
 ></A
 ><B
 >Q: Some events moving around, but no replication&#13;</B
@@ -193,16 +196,25 @@
 ><P
 ><B
 >A: </B
->On AIX and Solaris (and possibly elsewhere), both Slony-I <SPAN
+>On AIX and Solaris (and possibly elsewhere), both <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
->and PostgreSQL</I
+>and <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+></I
 ></SPAN
 > must be compiled with the <CODE
 CLASS="OPTION"
 >--enable-thread-safety</CODE
-> option.  The above results when PostgreSQL isn't so compiled.&#13;</P
+> option.  The above results when <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> isn't so compiled.&#13;</P
 ><P
 >What breaks here is that the libc (threadsafe) and libpq (non-threadsafe) use different memory locations for errno, thereby leading to the request failing.&#13;</P
 ><P
@@ -228,7 +240,10 @@
 CLASS="ENVAR"
 >LD_LIBRARY_PATH</CODE
 > had been set, on Solaris, to point to
-libraries from an old PostgreSQL compile.  That meant that even though
+libraries from an old <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> compile.  That meant that even though
 the database <SPAN
 CLASS="emphasis"
 ><I
@@ -266,7 +281,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1070"
+NAME="AEN1297"
 ></A
 ><B
 >Q: I tried creating a CLUSTER NAME with a "-" in it.
@@ -279,7 +294,13 @@
 ><P
 ><B
 >A: </B
-> Slony-I uses the same rules for unquoted identifiers as the PostgreSQL
+> <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> uses the same rules for unquoted identifiers as the <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+>
 main parser, so no, you probably shouldn't put a "-" in your
 identifier name.&#13;</P
 ><P
@@ -293,7 +314,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1076"
+NAME="AEN1305"
 ></A
 ><B
 >Q:  slon does not restart after crash&#13;</B
@@ -353,7 +374,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1087"
+NAME="AEN1316"
 ></A
 ><B
 >Q: ps finds passwords on command line&#13;</B
@@ -386,10 +407,13 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1096"
+NAME="AEN1325"
 ></A
 ><B
->Q: Slonik fails - cannot load PostgreSQL library - <TT
+>Q: Slonik fails - cannot load <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> library - <TT
 CLASS="COMMAND"
 >PGRES_FATAL_ERROR load '$libdir/xxid';</TT
 >&#13;</B
@@ -417,41 +441,69 @@
 library in the <CODE
 CLASS="ENVAR"
 >$libdir</CODE
-> directory that the PostgreSQL instance
-is using.  Note that the Slony-I components need to be installed in
-the PostgreSQL software installation for <SPAN
+> directory that the <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> instance
+is using.  Note that the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> components need to be installed in
+the <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> software installation for <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >each and every one</I
 ></SPAN
 >
-of the nodes, not just on the <SPAN
-CLASS="QUOTE"
->"master node."</SPAN
->&#13;</P
+of the nodes, not just on the origin node.&#13;</P
 ><P
 >This may also point to there being some other mismatch between
-the PostgreSQL binary instance and the Slony-I instance.  If you
-compiled Slony-I yourself, on a machine that may have multiple
-PostgreSQL builds <SPAN
+the <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> binary instance and the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> instance.  If you
+compiled <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> yourself, on a machine that may have multiple
+<SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> builds <SPAN
 CLASS="QUOTE"
 >"lying around,"</SPAN
 > it's possible that the slon or
 slonik binaries are asking to load something that isn't actually in
-the library directory for the PostgreSQL database cluster that it's
+the library directory for the <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> database cluster that it's
 hitting.&#13;</P
 ><P
 >Long and short: This points to a need to <SPAN
 CLASS="QUOTE"
 >"audit"</SPAN
 > what
-installations of PostgreSQL and Slony you have in place on the
-machine(s).  Unfortunately, just about any mismatch will cause things
-not to link up quite right.  See also <A
+installations of <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> and <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+you have in place on the machine(s).  Unfortunately, just about any
+mismatch will cause things not to link up quite right.  See also <A
 HREF="faq.html#SLONYFAQ02"
 >SlonyFAQ02 </A
-> concerning threading issues on Solaris ...&#13;</P
+> concerning threading issues
+on Solaris ...&#13;</P
 ><DIV
 CLASS="QANDAENTRY"
 ><DIV
@@ -459,7 +511,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1113"
+NAME="AEN1352"
 ></A
 ><B
 >Q: Table indexes with FQ namespace names
@@ -506,7 +558,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1121"
+NAME="AEN1360"
 ></A
 ><B
 >Q: I'm trying to get a slave subscribed, and get the following
@@ -545,7 +597,10 @@
 ><P
 ><B
 >A: </B
-> That doesn't work out: Slony-I won't work on the
+> That doesn't work out: <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> won't work on the
 <TT
 CLASS="COMMAND"
 >COPY</TT
@@ -566,7 +621,10 @@
 setting up the subscription.&#13;</P
 ><P
 >It could also be possible for there to be an old outstanding
-transaction blocking Slony-I from processing the sync.  You might want
+transaction blocking <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> from processing the sync.  You might want
 to take a look at pg_locks to see what's up:
 
 <TABLE
@@ -614,14 +672,20 @@
 setting up the first subscriber; it won't start on the second one
 until the first one has completed subscribing.&#13;</P
 ><P
->By the way, if there is more than one database on the PostgreSQL
+>By the way, if there is more than one database on the <SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+>
 cluster, and activity is taking place on the OTHER database, that will
 lead to there being <SPAN
 CLASS="QUOTE"
 >"transactions earlier than XID whatever"</SPAN
 > being
 found to be still in progress.  The fact that it's a separate database
-on the cluster is irrelevant; Slony-I will wait until those old
+on the cluster is irrelevant; <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> will wait until those old
 transactions terminate.</P
 ><DIV
 CLASS="QANDAENTRY"
@@ -630,7 +694,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1141"
+NAME="AEN1384"
 ></A
 ><B
 >Q: ERROR: duplicate key violates unique constraint "sl_table-pkey"&#13;</B
@@ -677,7 +741,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1149"
+NAME="AEN1392"
 ></A
 ><B
 >Q: I need to drop a table from a replication set</B
@@ -722,16 +786,28 @@
 ></TABLE
 >&#13;</P
 ><P
->The schema will obviously depend on how you defined the Slony-I
+>The schema will obviously depend on how you defined the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
 cluster.  The table ID, in this case, 40, will need to change to the
 ID of the table you want to have go away.
 
 You'll have to run these three queries on all of the nodes, preferably
-firstly on the "master" node, so that the dropping of this propagates
-properly.  Implementing this via a SLONIK statement with a new Slony
-event would do that.  Submitting the three queries using EXECUTE
-SCRIPT could do that.  Also possible would be to connect to each
-database and submit the queries by hand.</P
+firstly on the origin node, so that the dropping of this propagates
+properly.  Implementing this via a <A
+HREF="app-slonik.html#SLONIK"
+> slonik</A
+> statement with a new <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> event would do
+that.  Submitting the three queries using <TT
+CLASS="COMMAND"
+>EXECUTE SCRIPT</TT
+>
+could do that.  Also possible would be to connect to each database and
+submit the queries by hand.</P
 ></LI
 ></UL
 ></P
@@ -742,7 +818,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1163"
+NAME="AEN1410"
 ></A
 ><B
 >Q: I need to drop a sequence from a replication set&#13;</B
@@ -835,7 +911,10 @@
 >Similarly to <TT
 CLASS="COMMAND"
 >SET DROP TABLE</TT
->, this should be in place for Slony-I version
+>, this should be in place for <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> version
 1.0.5 as <TT
 CLASS="COMMAND"
 >SET DROP SEQUENCE.</TT
@@ -847,10 +926,13 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1186"
+NAME="AEN1434"
 ></A
 ><B
->Q: Slony-I: cannot add table to currently subscribed set 1&#13;</B
+>Q: <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>: cannot add table to currently subscribed set 1&#13;</B
 ></BIG
 ></P
 ><P
@@ -894,14 +976,17 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1195"
+NAME="AEN1444"
 ></A
 ><B
 >Q: Some nodes start consistently falling behind&#13;</B
 ></BIG
 ></P
 ><P
->I have been running Slony-I on a node for a while, and am seeing
+>I have been running <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> on a node for a while, and am seeing
 system performance suffering.&#13;</P
 ><P
 >I'm seeing long running queries of the form:
@@ -951,7 +1036,10 @@
 CLASS="FILENAME"
 >cleanup_thread.c</TT
 > contains a list of tables that are
-frequently vacuumed automatically.  In Slony-I 1.0.2,
+frequently vacuumed automatically.  In <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> 1.0.2,
 <CODE
 CLASS="ENVAR"
 >pg_listener</CODE
@@ -970,7 +1058,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1213"
+NAME="AEN1464"
 ></A
 ><B
 >Q: I started doing a backup using pg_dump, and suddenly Slony stops&#13;</B
@@ -994,11 +1082,17 @@
 >, which has taken out an <TT
 CLASS="COMMAND"
 >AccessShareLock</TT
-> on all of the tables in the database, including the Slony-I ones, and&#13;</P
+> on all of the tables in the database, including the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> ones, and&#13;</P
 ></LI
 ><LI
 ><P
-> A Slony-I sync event, which wants to grab a <TT
+> A <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> sync event, which wants to grab a <TT
 CLASS="COMMAND"
 >AccessExclusiveLock</TT
 > on	 the table <CODE
@@ -1108,7 +1202,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1251"
+NAME="AEN1504"
 ></A
 ><B
 >Q: The slons spent the weekend out of commission [for
@@ -1121,36 +1215,50 @@
 ><P
 ><B
 >A: </B
->You might want to take a look at the sl_log_1/sl_log_2 tables, and do
-a summary to see if there are any really enormous Slony-I transactions
-in there.  Up until at least 1.0.2, there needs to be a slon connected
-to the master in order for <TT
+> You might want to take a look at the sl_log_1/sl_log_2
+tables, and do a summary to see if there are any really enormous
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> transactions in there.  Up until at least 1.0.2,
+there needs to be a slon connected to the origin in order for
+<TT
 CLASS="COMMAND"
 >SYNC</TT
 > events to be generated.&#13;</P
 ><P
 >If none are being generated, then all of the updates until the next
-one is generated will collect into one rather enormous Slony-I
+one is generated will collect into one rather enormous <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
 transaction.&#13;</P
 ><P
->Conclusion: Even if there is not going to be a subscriber around, you
-<SPAN
+>Conclusion: Even if there is not going to be a subscriber
+around, you <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >really</I
 ></SPAN
-> want to have a slon running to service the <SPAN
-CLASS="QUOTE"
->"master"</SPAN
-> node.&#13;</P
+> want to have a slon running to service
+the origin node.&#13;</P
 ><P
->Some future version (probably 1.1) may provide a way for
-<TT
+><SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> 1.1 provides a stored procedure that
+allows <TT
 CLASS="COMMAND"
 >SYNC</TT
-> counts to be updated on the master by the stored
-function that is invoked by the table triggers.&#13;</P
+> counts to be updated on the origin based on a
+<B
+CLASS="APPLICATION"
+>cron</B
+> job even if there is no <A
+HREF="app-slon.html#SLON"
+> slon</A
+> daemon running.&#13;</P
 ><DIV
 CLASS="QANDAENTRY"
 ><DIV
@@ -1158,10 +1266,11 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1263"
+NAME="AEN1520"
 ></A
 ><B
->Q: I pointed a subscribing node to a different parent and it stopped replicating&#13;</B
+>Q: I pointed a subscribing node to a different provider
+and it stopped replicating&#13;</B
 ></BIG
 ></P
 ></DIV
@@ -1178,15 +1287,15 @@
 ><UL
 ><LI
 ><P
-> Node 1 - master</P
+> Node 1 - provider</P
 ></LI
 ><LI
 ><P
-> Node 2 - child of node 1 - the node we're reinitializing</P
+> Node 2 - subscriber to node 1 - the node we're reinitializing</P
 ></LI
 ><LI
 ><P
-> Node 3 - child of node 3 - node that should keep replicating</P
+> Node 3 - subscriber to node 3 - node that should keep replicating</P
 ></LI
 ></UL
 >&#13;</P
@@ -1281,7 +1390,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1294"
+NAME="AEN1551"
 ></A
 ><B
 >Q: After dropping a node, sl_log_1 isn't getting purged out anymore.&#13;</B
@@ -1295,7 +1404,10 @@
 >A: </B
 > This is a common scenario in versions before 1.0.5, as
 the "clean up" that takes place when purging the node does not include
-purging out old entries from the Slony-I table, sl_confirm, for the
+purging out old entries from the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> table, sl_confirm, for the
 recently departed node.&#13;</P
 ><P
 > The node is no longer around to update confirmations of what
@@ -1407,7 +1519,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1318"
+NAME="AEN1576"
 ></A
 ><B
 >Q: Replication Fails - Unique Constraint Violation&#13;</B
@@ -1450,7 +1562,10 @@
 ></TABLE
 >&#13;</P
 ><P
->The transaction rolls back, and Slony-I tries again, and again,
+>The transaction rolls back, and <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> tries again, and again,
 and again.  The problem is with one of the <SPAN
 CLASS="emphasis"
 ><I
@@ -1462,7 +1577,10 @@
 CLASS="COMMAND"
 >log_cmdtype = 'I'</TT
 >.  That isn't quite obvious; what takes
-place is that Slony-I groups 10 update queries together to diminish
+place is that <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> groups 10 update queries together to diminish
 the number of network round trips.&#13;</P
 ></DIV
 ><DIV
@@ -1501,7 +1619,10 @@
 ><LI
 ><P
 > The scenario seems to involve a delete transaction
-having been missed by Slony-I.&#13;</P
+having been missed by <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>.&#13;</P
 ></LI
 ></UL
 >&#13;</P
@@ -1513,7 +1634,10 @@
 >What is necessary, at this point, is to drop the replication set
 (or even the node), and restart replication from scratch on that node.&#13;</P
 ><P
->In Slony-I 1.0.5, the handling of purges of sl_log_1 are rather
+>In <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> 1.0.5, the handling of purges of sl_log_1 is rather
 more conservative, refusing to purge entries that haven't been
 successfully synced for at least 10 minutes on all nodes.  It is not
 certain that that will prevent the "glitch" from taking place, but it
@@ -1528,7 +1652,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1339"
+NAME="AEN1601"
 ></A
 ><B
 >Q:  If you have a slonik script something like this, it
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -1,18 +1,20 @@
 <sect1 id="concepts"> <title>Slony-I Concepts</title>
 
 
-<para>In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
+<para>In order to set up a set of Slony-I replicas, it is necessary to
+understand the following major abstractions that it uses:
 
 <itemizedlist>
 	<listitem><para> Cluster
 	<listitem><para> Node
 	<listitem><para> Replication Set
-	<listitem><para> Provider and Subscriber
+	<listitem><para> Origin, Providers and Subscribers
 </itemizedlist>
 
 <sect2><title>Cluster</title>
 
-<para>In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases.
+<para>In Slony-I terms, a Cluster is a named set of PostgreSQL
+database instances; replication takes place between those databases.
 
 <para>The cluster name is specified in each and every Slonik script via the directive:
 <programlisting>
@@ -32,7 +34,9 @@
  NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';
 </programlisting>
 
-<para>The CONNINFO information indicates a string argument that will ultimately be passed to the <function>PQconnectdb()</function> libpq function. 
+<para>The CONNINFO information indicates a string argument that will
+ultimately be passed to the <function>PQconnectdb()</function> libpq
+function.
 
 <para>Thus, a Slony-I cluster consists of:
 <itemizedlist>
@@ -48,22 +52,22 @@
 <para>You may have several sets, and the <quote/flow/ of replication does
 not need to be identical between those sets.
 
-<sect2><title> Provider and Subscriber</title>
+<sect2><title> Origin, Providers and Subscribers</title>
 
-<para>Each replication set has some <quote>master</quote> node, which
-winds up being the <emphasis>only</emphasis> place where user
-applications are permitted to modify data in the tables that are being
-replicated.  That <quote>master</quote> may be considered the
-originating <quote>provider node;</quote> it is the main place from
+<para>Each replication set has some origin node, which is the
+<emphasis>only</emphasis> place where user applications are permitted
+to modify data in the tables that are being replicated.  This might
+also be termed the <quote/master provider/; it is the main place from
 which data is provided.
 
 <para>Other nodes in the cluster will subscribe to the replication
 set, indicating that they want to receive the data.
 
-<para>The "master" node will never be considered a "subscriber."  But
-Slony-I supports the notion of cascaded subscriptions, that is, a node
-that is subscribed to the "master" may also behave as a "provider" to
-other nodes in the cluster.
+<para>The origin node will never be considered a <quote/subscriber./
+(Ignoring the case where the cluster is reshaped, and the origin is
+moved to another node.)  But Slony-I supports the notion of cascaded
+subscriptions, that is, a node that is subscribed to the origin may
+also behave as a <quote/provider/ to other nodes in the cluster.
 
 </sect1>
 
Index: failover.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/failover.html -Ldoc/adminguide/failover.html -u -w -r1.2 -r1.3
--- doc/adminguide/failover.html
+++ doc/adminguide/failover.html
@@ -86,47 +86,83 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN700"
+NAME="AEN810"
 >7.1. Foreword</A
 ></H2
 ><P
-> Slony-I is an asynchronous replication system.  Because of
-that, it is almost certain that at the moment the current origin of a
-set fails, the last transactions committed have not propagated to the
-subscribers yet.  They always fail under heavy load, and you know it.
-Thus the goal is to prevent the main server from failing.  The best
-way to do that is frequent maintenance.&#13;</P
-><P
-> Opening the case of a running server is not exactly what we all
-consider professional system maintenance.  And interestingly, those
-users who use replication for backup and failover purposes are usually
-the ones that have a very low tolerance for words like "downtime".  To
-meet these requirements, Slony-I has not only failover capabilities,
-but controlled master role transfer features too.&#13;</P
+> <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> is an asynchronous
+replication system.  Because of that, it is almost certain that at the
+moment the current origin of a set fails, the final transactions
+committed at the origin will not yet have propagated to the
+subscribers.  Systems are particularly likely to fail under heavy
+load; that is one of the corollaries of Murphy's Law.  Therefore the
+principal goal is to <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>prevent</I
+></SPAN
+> the main server from
+failing.  The best way to do that is frequent maintenance.</P
+><P
+> Opening the case of a running server is not exactly what we
+should consider a <SPAN
+CLASS="QUOTE"
+>"professional"</SPAN
+> way to do system
+maintenance.  And interestingly, those users who found it valuable to
+use replication for backup and failover purposes are the very ones
+that have the lowest tolerance for terms like <SPAN
+CLASS="QUOTE"
+>"system
+downtime."</SPAN
+> To help support these requirements, Slony-I has not
+only failover capabilities, but features for controlled origin
+transfer.</P
 ><P
 > It is assumed in this document that the reader is familiar with
-the slonik utility and knows at least how to set up a simple 2 node
-replication system with Slony-I.&#13;</P
+the <A
+HREF="app-slonik.html#SLONIK"
+> <B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+>
+utility and knows at least how to set up a simple 2 node replication
+system with <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>.</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN705"
->7.2. Switchover</A
+NAME="AEN822"
+>7.2. Controlled Switchover</A
 ></H2
 ><P
 > We assume a current <SPAN
 CLASS="QUOTE"
 >"origin"</SPAN
-> as node1 (AKA master) with
-one <SPAN
+> as node1 with one
+<SPAN
 CLASS="QUOTE"
 >"subscriber"</SPAN
-> as node2 (AKA slave).  A web application on a
-third server is accessing the database on node1.  Both databases are
-up and running and replication is more or less in sync.
+> as node2 (<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>i.e.</I
+></SPAN
+> -
+slave).  A web application on a third server is accessing the database
+on node1.  Both databases are up and running and replication is more
+or less in sync.
 
 <P
 ></P
@@ -140,7 +176,7 @@
 CLASS="APPLICATION"
 >pg_pool</B
 > for the applications database
-connections merely have to shut down the pool.&#13;</P
+connections merely have to shut down the pool.</P
 ></LI
 ><LI
 ><P
@@ -161,60 +197,123 @@
 ></TD
 ></TR
 ></TABLE
->&#13;</P
+></P
 ><P
-> After these commands, the origin (master role) of data set 1 is
-now on node2.  It is not simply transferred.  It is done in a fashion
-so that node1 is now a fully synchronized subscriber actively
-replicating the set.  So the two nodes completely switched roles.&#13;</P
+> After these commands, the origin (master role) of data set 1
+has been transferred to node2.  And it is not simply transferred; it
+is done in a fashion such that node1 becomes a fully synchronized
+subscriber, actively replicating the set.  So the two nodes have
+switched roles completely.</P
 ></LI
 ><LI
 ><P
-> After reconfiguring the web application (or pgpool)
-to connect to the database on node2 instead, the web server is
-restarted and resumes normal operation.&#13;</P
-><P
-> Done in one shell script, that does the shutdown, slonik, move
-config files and startup all together, this entire procedure takes
-less than 10 seconds.&#13;</P
+> After reconfiguring the web application (or
+<B
+CLASS="APPLICATION"
+>pgpool</B
+>) to connect to the database on node2, the web
+server is restarted and resumes normal operation.</P
+><P
+> Done in one shell script, that does the application shutdown,
+<B
+CLASS="APPLICATION"
+>slonik</B
+>, move config files and startup all together, this
+entire procedure is likely to take less than 10 seconds.</P
 ></LI
 ></UL
->&#13;</P
+></P
 ><P
-> It is now possible to simply shutdown node1 and do whatever is
-required.  When node1 is restarted later, it will start replicating
-again and eventually catch up after a while.  At this point the whole
-procedure is executed with exchanged node IDs and the original
-configuration is restored.&#13;</P
+> You may now simply shut down the server hosting node1 and do
+whatever is required to maintain the server.  When the <B
+CLASS="APPLICATION"
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+> for node1 is restarted later,
+it will start replicating again, and soon catch up.  At this point the
+procedure to switch origins is executed again to restore the original
+configuration.</P
+><P
+> This is the preferred way to handle things; it runs quickly,
+under control of the administrators, and there is no need for there to
+be any loss of data.</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN722"
+NAME="AEN845"
 >7.3. Failover</A
 ></H2
 ><P
-> Because of the possibility of missing not-yet-replicated
-transactions that are committed, failover is the worst thing that can
-happen in a master-slave replication scenario.  If there is any
-possibility to bring back the failed server even if only for a few
-minutes, we strongly recommend that you follow the switchover
-procedure above.&#13;</P
+> If some more serious problem occurs on the <SPAN
+CLASS="QUOTE"
+>"origin"</SPAN
+>
+server, it may be necessary to failover to a backup server.  This is a
+highly undesirable circumstance, as transactions <SPAN
+CLASS="QUOTE"
+>"committed"</SPAN
+> on
+the origin, but not applied to the subscribers, will be lost.  You may
+have reported these transactions as <SPAN
+CLASS="QUOTE"
+>"successful"</SPAN
+> to outside
+users.  As a result, failover should be considered a <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>last
+resort</I
+></SPAN
+>.  If the <SPAN
+CLASS="QUOTE"
+>"injured"</SPAN
+> origin server can be brought up to
+the point where it can limp along long enough to do a controlled
+switchover, that is <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>greatly</I
+></SPAN
+> preferable.</P
 ><P
 > Slony does not provide any automatic detection for failed
 systems.  Abandoning committed transactions is a business decision
 that cannot be made by a database.  If someone wants to put the
 commands below into a script executed automatically from the network
-monitoring system, well ... its your data.
+monitoring system, well ... it's <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>your</I
+></SPAN
+> data, and it's <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>your</I
+></SPAN
+> failover policy.
 
 <P
 ></P
 ><UL
 ><LI
 ><P
->	The slonik command
+> The <A
+HREF="app-slonik.html#SLONIK"
+> <B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+> command
+
 <TABLE
 BORDER="0"
 BGCOLOR="#E0E0E0"
@@ -227,32 +326,54 @@
 ></TD
 ></TR
 ></TABLE
->&#13;</P
+></P
 ><P
 > causes node2 to assume the ownership (origin) of all sets that
-have node1 as their current origin.  In the case there would be more
-nodes, All direct subscribers of node1 are instructed that this is
-happening.  Slonik would also query all direct subscribers to figure
-out which node has the highest replication status (latest committed
-transaction) for each set, and the configuration would be changed in a
-way that node2 first applies those last minute changes before actually
-allowing write access to the tables.&#13;</P
+have node1 as their current origin.  If there should happen to be
+additional nodes in the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster, all direct
+subscribers of node1 are instructed that this is happening.
+<B
+CLASS="APPLICATION"
+>Slonik</B
+> will also query all direct subscribers in order
+to determine which node has the highest replication status
+(<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>i.e.</I
+></SPAN
+> - the latest committed transaction) for each set, and
+the configuration will be changed in a way that node2 first applies
+those final changes before actually allowing write access to the tables.</P
 ><P
-> In addition, all nodes that subscribed directly from node1 will
+> In addition, all nodes that subscribed directly to node1 will
 now use node2 as data provider for the set.  This means that after the
 failover command succeeded, no node in the entire replication setup
-will receive anything from node1 any more.  &#13;</P
+will receive anything from node1 any more.</P
 ></LI
 ><LI
 ><P
-> Reconfigure and restart the application (or pgpool)
-to cause it to reconnect to node2.&#13;</P
+> Reconfigure and restart the application (or <B
+CLASS="APPLICATION"
+>pgpool</B
+>)
+to cause it to reconnect to node2.</P
 ></LI
 ><LI
 ><P
 > After the failover is complete and node2 accepts
 write operations against the tables, remove all remnants of node1's
-configuration information with the slonik command
+configuration information with the <A
+HREF="app-slonik.html#SLONIK"
+> <B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+> command
 
 <TABLE
 BORDER="0"
@@ -269,25 +390,34 @@
 ></P
 ></LI
 ></UL
->&#13;</P
+></P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN737"
->7.4. After failover, getting back node1</A
+NAME="AEN876"
+>7.4. After Failover, Reconfiguring node1</A
 ></H2
 ><P
-> After the above failover, the data stored on node1 must be
-considered out of sync with the rest of the nodes.  Therefore, the
-only way to get node1 back and transfer the master role to it is to
-rebuild it from scratch as a slave, let it catch up and then follow
-the switchover procedure.
-
-
- </P
+> After the above failover, the data stored on node1 is
+considered out of sync with the rest of the nodes, and must be treated
+as corrupt.  Therefore, the only way to get node1 back and transfer
+the origin role back to it is to rebuild it from scratch as a
+subscriber, let it catch up, and then follow the switchover
+procedure.</P
+><P
+> If the database is very large, it may take many hours to
+recover node1 as a functioning <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> node; that is
+another reason to consider failover as an undesirable <SPAN
+CLASS="QUOTE"
+>"final
+resort."</SPAN
+></P
 ></DIV
 ></DIV
 ><DIV
Index: addthings.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/addthings.html -Ldoc/adminguide/addthings.html -u -w -r1.2 -r1.3
--- doc/adminguide/addthings.html
+++ doc/adminguide/addthings.html
@@ -87,53 +87,70 @@
 ><P
 >This can be fairly easily remedied.&#13;</P
 ><P
->You cannot directly use <TT
+>You cannot directly use <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+>
+commands <TT
 CLASS="COMMAND"
 >SET ADD TABLE</TT
 > or <TT
 CLASS="COMMAND"
->SET
-ADD SEQUENCE</TT
-> in order to add tables and sequences to a replication
-set that is presently replicating; you must instead create a new
-replication set.  Once it is identically subscribed (e.g. - the set of
+>SET ADD SEQUENCE</TT
+> in
+order to add tables and sequences to a replication set that is
+presently replicating; you must instead create a new replication set.
+Once it is identically subscribed (e.g. - the set of providers and
 subscribers is <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
->identical</I
+>entirely identical</I
 ></SPAN
-> to that for the set it is to merge
-with), the sets may be merged together using <TT
+> to that for the set it is
+to merge with), the sets may be merged together using <TT
 CLASS="COMMAND"
->MERGE SET</TT
+>MERGE
+SET</TT
 >.&#13;</P
 ><P
->Up to and including 1.0.2, there is a potential problem where if
-<TT
+>Up to and including 1.0.2, there was a potential problem where
+if <TT
 CLASS="COMMAND"
 >MERGE_SET</TT
-> is issued when other subscription-related events
-are pending, it is possible for things to get pretty confused on the
-nodes where other things were pending.  This problem was resolved in
-1.0.5.&#13;</P
+> was issued while other subscription-related
+events were pending, it was possible for things to get pretty confused
+on the nodes where other things were pending.  This problem was
+resolved in 1.0.5.&#13;</P
 ><P
 >It is suggested that you be very deliberate when adding such
 things.  For instance, submitting multiple subscription requests for a
-particular set in one Slonik script often turns out quite badly.  If
-it is truly necessary to automate this, you'll probably want to submit
-<TT
+particular set in one <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> script
+often turns out quite badly.  If it is <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>truly</I
+></SPAN
+> necessary to
+automate this, you'll probably want to submit <TT
 CLASS="COMMAND"
 >WAIT FOR EVENT</TT
-> requests in between subscription requests in
-order that the Slonik script wait for one subscription to complete
-processing before requesting the next one.&#13;</P
+>
+requests in between subscription requests in order that the <A
+HREF="app-slonik.html#SLONIK"
+> slonik </A
+> script wait for one subscription to
+complete processing before requesting the next one.&#13;</P
 ><P
 >But in general, it is likely to be easier to cope with complex
 node reconfigurations by making sure that one change has been
 successfully processed before going on to the next.  It's way easier
-to fix one thing that has broken than the interaction of five things
-that have broken.
+to fix one thing that has broken than to piece things together after
+the interaction of five things that have all broken.
 
 
  </P
Index: subscribenodes.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/subscribenodes.sgml -Ldoc/adminguide/subscribenodes.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/subscribenodes.sgml
+++ doc/adminguide/subscribenodes.sgml
@@ -1,14 +1,15 @@
 <sect1 id="subscribenodes"> <title/ Subscribing Nodes/
 
 <para>Before you subscribe a node to a set, be sure that you have
-<application/slon/s running for both the master and the new
-subscribing node. If you don't have slons running, nothing will
-happen, and you'll beat your head against a wall trying to figure out
-what is going on.
-
-<para>Subscribing a node to a set is done by issuing the slonik
-command <command/subscribe set/. It may seem tempting to try to
-subscribe several nodes to a set within a single try block like this:
+<application><link linkend="slon"> slon </link></application>
+processes running for both the provider and the new subscribing node. If
+you don't have slons running, nothing will happen, and you'll beat
+your head against a wall trying to figure out what is going on.
+
+<para>Subscribing a node to a set is done by issuing the <link
+linkend="slonik"> slonik </link> command <command/subscribe set/. It
+may seem tempting to try to subscribe several nodes to a set within a
+single try block like this:
 
 <programlisting>
 try {
@@ -23,14 +24,15 @@
 </programlisting>
 
 
-<para> You are just asking for trouble if you try to subscribe sets in
-that fashion. The proper procedure is to subscribe one node at a time,
-and to check the logs and databases before you move onto subscribing
-the next node to the set. It is also worth noting that the
-<quote/success/ within the above slonik try block does not imply that
-nodes 2, 3, and 4 have all been successfully subscribed. It merely
-indicates that the slonik commands were successfully received by the
-<application/slon/ running on the master node.
+<para> But you are just asking for trouble if you try to subscribe
+sets in that fashion. The proper procedure is to subscribe one node at
+a time, and to check the logs and databases before you move onto
+subscribing the next node to the set. It is also worth noting that the
+<quote/success/ within the above <link linkend="slonik">
+<application/slonik/ </link> try block does not imply that nodes 2, 3,
+and 4 have all been successfully subscribed. It merely indicates that
+the slonik commands were successfully received by the
+<application/slon/ running on the origin node.
 
 <para>A typical sort of problem that will arise is that a cascaded
 subscriber is looking for a provider that is not ready yet.  In that
@@ -42,24 +44,25 @@
 node is stuck on the attempt to subscribe it.
 
 <para>When you subscribe a node to a set, you should see something
-like this in your slony logs for the master node:
+like this in your <application/slon/ logs for the provider node:
 
 <screen>
 DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET
 </screen>
 
-<para>You should also start seeing log entries like this in the slony logs for the subscribing node:
+<para>You should also start seeing log entries like this in the
+<application/slon/ logs for the subscribing node:
 
 <screen>
 DEBUG2 remoteWorkerThread_1: copy table public.my_table
 </screen>
 
 <para>It may take some time for larger tables to be copied from the
-master node to the new subscriber. If you check the pg_stat_activity
-table on the master node, you should see a query that is copying the
+provider node to the new subscriber. If you check the pg_stat_activity
+table on the provider node, you should see a query that is copying the
 table to stdout.
 
-<para>The table <envar/sl_subscribe/ on both the master, and the new
+<para>The table <envar/sl_subscribe/ on both the provider, and the new
 subscriber should contain entries for the new subscription:
 
 <screen>
@@ -69,8 +72,9 @@
 </screen>
 
 <para>A final test is to insert a row into one of the replicated
-tables on the master node, and verify that the row is copied to the
+tables on the origin node, and verify that the row is copied to the
 new subscriber.
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
Index: reshape.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/reshape.sgml
+++ doc/adminguide/reshape.sgml
@@ -7,33 +7,38 @@
 <itemizedlist>
 
 <listitem><para> If you want a node that is a subscriber to become the
-"master" provider for a particular replication set, you will have to
-issue the slonik MOVE SET operation to change that "master" provider
-node.
+origin for a particular replication set, you will have to issue a
+suitable <link linkend="slonik"> slonik </link> <command/MOVE SET/
+operation (a sketch follows this list).
 
 <listitem><para> You may subsequently, or instead, wish to modify the
 subscriptions of other nodes.  You might want to modify a node to get
 its data from a different provider, or to change it to turn forwarding
-on or off.  This can be accomplished by issuing the slonik SUBSCRIBE
-SET operation with the new subscription information for the node;
-Slony-I will change the configuration.
+on or off.  This can be accomplished by issuing the slonik
+<command/SUBSCRIBE SET/ operation with the new subscription
+information for the node; <productname/Slony-I/ will change the
+configuration.
 
 <listitem><para> If the directions of data flows have changed, it is
-doubtless appropriate to issue a set of DROP LISTEN operations to drop
-out obsolete paths between nodes and SET LISTEN to add the new ones.
-At present, this is not changed automatically; at some point, MOVE SET
-and SUBSCRIBE SET might change the paths as a side-effect.  See
-SlonyListenPaths for more information about this.  In version 1.1 and
-later, it is likely that the generation of sl_listen entries will be
-entirely automated, where they will be regenerated when changes are
-made to sl_path or sl_subscribe, thereby making it unnecessary to even
-think about SET LISTEN.
+doubtless appropriate to issue a set of <command/DROP LISTEN/
+operations to drop out obsolete paths between nodes and <command/SET
+LISTEN/ to add the new ones.  At present, this is not changed
+automatically; at some point, <command/MOVE SET/ and
+<command/SUBSCRIBE SET/ might change the paths as a side-effect.  See
+<link linkend="ListenPaths"> Slony Listen Paths </link> for more
+information about this.  In version 1.1 and later, it is likely that
+the generation of sl_listen entries will be entirely automated, where
+they will be regenerated when changes are made to sl_path or
+sl_subscribe, thereby making it unnecessary to even think about
+<command/SET LISTEN/.
 
 </itemizedlist>
 
-<para> The "altperl" toolset includes a "init_cluster.pl" script that
-is quite up to the task of creating the new SET LISTEN commands; it
-isn't smart enough to know what listener paths should be dropped.
+<para> The <filename/altperl/ toolset includes a
+<application/init_cluster.pl/ script that is quite up to the task of
+creating the new <command/SET LISTEN/ commands; it isn't, however,
+smart enough to know what listener paths should be dropped.
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -5,32 +5,35 @@
 
 <para>This can be fairly easily remedied.
 
-<para>You cannot directly use <command/SET ADD TABLE/ or <command/SET
-ADD SEQUENCE/ in order to add tables and sequences to a replication
-set that is presently replicating; you must instead create a new
-replication set.  Once it is identically subscribed (e.g. - the set of
-subscribers is <emphasis/identical/ to that for the set it is to merge
-with), the sets may be merged together using <command/MERGE SET/.
+<para>You cannot directly use <link linkend="slonik"> slonik </link>
+commands <command/SET ADD TABLE/ or <command/SET ADD SEQUENCE/ in
+order to add tables and sequences to a replication set that is
+presently replicating; you must instead create a new replication set.
+Once it is identically subscribed (e.g. - the set of providers and
+subscribers is <emphasis/entirely identical/ to that for the set it is
+to merge with), the sets may be merged together using <command/MERGE
+SET/.
 
-<para>Up to and including 1.0.2, there is a potential problem where if
-<command/MERGE_SET/ is issued when other subscription-related events
-are pending, it is possible for things to get pretty confused on the
-nodes where other things were pending.  This problem was resolved in
-1.0.5.
+<para>Up to and including 1.0.2, there was a potential problem where
+if <command/MERGE_SET/ was issued while other subscription-related
+events were pending, it was possible for things to get pretty confused
+on the nodes where other things were pending.  This problem was
+resolved in 1.0.5.
 
 <para>It is suggested that you be very deliberate when adding such
 things.  For instance, submitting multiple subscription requests for a
-particular set in one Slonik script often turns out quite badly.  If
-it is truly necessary to automate this, you'll probably want to submit
-<command/WAIT FOR EVENT/ requests in between subscription requests in
-order that the Slonik script wait for one subscription to complete
-processing before requesting the next one.
+particular set in one <link linkend="slonik"> slonik </link> script
+often turns out quite badly.  If it is <emphasis/truly/ necessary to
+automate this, you'll probably want to submit <command/WAIT FOR EVENT/
+requests in between subscription requests in order that the <link
+linkend="slonik"> slonik </link> script wait for one subscription to
+complete processing before requesting the next one.
 
 <para>But in general, it is likely to be easier to cope with complex
 node reconfigurations by making sure that one change has been
 successfully processed before going on to the next.  It's way easier
-to fix one thing that has broken than the interaction of five things
-that have broken.
+to fix one thing that has broken than to piece things together after
+the interaction of five things that have all broken.
 
 <!-- Keep this comment at the end of the file
 Local variables:
Index: startslons.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/startslons.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/startslons.sgml -Ldoc/adminguide/startslons.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/startslons.sgml
+++ doc/adminguide/startslons.sgml
@@ -1,38 +1,39 @@
 <sect1 id="slonstart"> <title/Slon daemons/
 
-<para>The programs that actually perform Slony-I replication are the
+<para>The programs that actually perform <productname/Slony-I/ replication are the
 <application/slon/ daemons.
 
-<para>You need to run one <application/slon/ instance for each node in
-a Slony-I cluster, whether you consider that node a <quote/master/ or
-a <quote/slave./ Since a <command/MOVE SET/ or <command/FAILOVER/ can
-switch the roles of nodes, slon needs to be able to function for both
-providers and subscribers.  It is not essential that these daemons run
-on any particular host, but there are some principles worth
-considering:
+<para>You need to run one <application><link linkend="slon"> slon
+</link></application> instance for each node in a
+<productname/Slony-I/ cluster, whether you consider that node a
+<quote/master/ or a <quote/slave./ Since a <command/MOVE SET/ or
+<command/FAILOVER/ can switch the roles of nodes, slon needs to be
+able to function for both providers and subscribers.  It is not
+essential that these daemons run on any particular host, but there are
+some principles worth considering:
 
 <itemizedlist>
 
-<listitem><Para> Each slon needs to be able to communicate quickly
-with the database whose <quote/node controller/ it is.  Therefore, if
-a Slony-I cluster runs across some form of Wide Area Network, each
-slon process should run on or nearby the databases each is
-controlling.  If you break this rule, no particular disaster should
-ensue, but the added latency introduced to monitoring events on the
-slon's <quote/own node/ will cause it to replicate in a
-<emphasis/somewhat/ less timely manner.
-
-<listitem><Para> The fastest results would be achieved by having each
-slon run on the database server that it is servicing.  If it runs
-somewhere within a fast local network, performance will not be
-noticeably degraded.
+<listitem><Para> Each <application/slon/ needs to be able to
+communicate quickly with the database whose <quote/node controller/ it
+is.  Therefore, if a <productname/Slony-I/ cluster runs across some
+form of Wide Area Network, each slon process should run on or nearby
+the databases each is controlling.  If you break this rule, no
+particular disaster should ensue, but the added latency introduced to
+monitoring events on the slon's <quote/own node/ will cause it to
+replicate in a <emphasis/somewhat/ less timely manner.
+
+<listitem><Para> The very fastest results would be achieved by having
+each <application/slon/ run on the database server that it is
+servicing.  If it runs somewhere within a fast local network,
+performance will not be noticeably degraded.
 
 <listitem><Para> It is an attractive idea to run many of the
 <application/slon/ processes for a cluster on one machine, as this
 makes it easy to monitor them both in terms of log files and process
-tables from one location.  This eliminates the need to login to
-several hosts in order to look at log files or to restart <application/slon/
-instances.
+tables from one location.  This also eliminates the need to log in to
+several hosts in order to look at log files or to restart
+<application/slon/ instances.
 
 </itemizedlist>
 
@@ -40,29 +41,34 @@
 
 <itemizedlist>
 
-<listitem><Para> <filename>tools/altperl/slon_watchdog.pl</filename> -
-an <quote/early/ version that basically wraps a loop around the
-invocation of <application/slon/, restarting any time it falls over
+<listitem><para> <filename>tools/altperl/slon_watchdog.pl</filename> -
+an <quote>early</quote> version that basically wraps a loop around the
+invocation of <application><link linkend="slon"> slon
+</link></application>, restarting it any time it falls over</para>
+</listitem>
 
 <listitem><Para> <filename>tools/altperl/slon_watchdog2.pl</filename>
 - a somewhat more intelligent version that periodically polls the
 database, checking to see if a <command/SYNC/ has taken place
 recently.  We have had VPN connections that occasionally fall over
-without signalling the application, so that the <application/slon/
-stops working, but doesn't actually die; this polling addresses that
-issue.
+without signalling the application, so that the <application><link
+linkend="slon"> slon </link></application> stops working, but doesn't
+actually die; this polling addresses that issue.
 
 </itemizedlist>
 
-<para>The <filename/slon_watchdog2.pl/ script is probably
-<emphasis/usually/ the preferable thing to run.  It was at one point
-not preferable to run it whilst subscribing a very large replication
-set where it is expected to take many hours to do the initial
-<command/COPY SET/.  The problem that came up in that case was that it
-figured that since it hasn't done a <command/SYNC/ in 2 hours,
-something was broken requiring restarting slon, thereby restarting the
-<command/COPY SET/ event.  More recently, the script has been changed
-to detect <command/COPY SET/ in progress.
+<para>The <filename>slon_watchdog2.pl</filename> script is probably
+<emphasis>usually</emphasis> the preferable thing to run.  It was at
+one point not preferable to run it whilst subscribing a very large
+replication set where it is expected to take many hours to do the
+initial <command>COPY SET</command>.  The problem that came up in that
+case was that it figured that since it hadn't done a
+<command>SYNC</command> in 2 hours, something was broken, requiring a
+restart of slon, thereby restarting the <command>COPY SET</command>
+event.  More recently, the script has been changed to detect
+<command>COPY SET</command> in progress.</para>
+
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
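
Since one slon instance must run per node, starting the daemons for a
two-node cluster amounts to something like the following sketch; the
cluster name and conninfo strings are illustrative only:

	slon mycluster "dbname=payroll host=db1 user=slony" &
	slon mycluster "dbname=payroll host=db2 user=slony" &
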
--- /dev/null
+++ doc/adminguide/app-slonik.html
@@ -0,0 +1,292 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>slonik</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+TITLE="Slony-I Commands"
+HREF="slony-commands.html"><LINK
+REL="PREVIOUS"
+TITLE="slon"
+HREF="app-slon.html"><LINK
+REL="NEXT"
+HREF="slonyadmin.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="REFENTRY"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="app-slon.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="slonyadmin.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><H1
+><A
+NAME="APP-SLONIK"
+></A
+><B
+CLASS="APPLICATION"
+>slonik</B
+></H1
+><DIV
+CLASS="REFNAMEDIV"
+><A
+NAME="AEN454"
+></A
+><H2
+>Name</H2
+><B
+CLASS="APPLICATION"
+>slonik</B
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> command processor
+    </DIV
+><DIV
+CLASS="REFSYNOPSISDIV"
+><A
+NAME="AEN461"
+></A
+><H2
+>Synopsis</H2
+><P
+><TT
+CLASS="COMMAND"
+>slonik</TT
+> [<TT
+CLASS="REPLACEABLE"
+><I
+>filename</I
+></TT
+>
+  ]</P
+></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="AEN466"
+></A
+><H2
+>Description</H2
+><P
+>     <B
+CLASS="APPLICATION"
+>slonik</B
+> is the command processor
+     application that is used to set up and modify configurations of
+     <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> replication clusters.
+    </P
+></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="AEN471"
+></A
+><H2
+> Outline</H2
+><P
+>The <B
+CLASS="APPLICATION"
+>slonik</B
+> command line utility is
+  intended to be embedded into shell scripts; it reads commands
+  from files or stdin.</P
+><P
+>It reads a set of Slonik statements, which are written in a
+  scripting language with syntax similar to that of SQL, and performs
+  the set of configuration changes on the slony nodes specified in the
+  script.</P
+><P
+>Nearly all of the real configuration work is actually done by
+  calling stored procedures after loading the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+  support base into a database.  <B
+CLASS="APPLICATION"
+>Slonik</B
+> was created
+  because these stored procedures have special requirements as to on
+  which particular node in the replication system they are called.
+  The absence of named parameters for stored procedures makes it
+  rather hard to do this from the <B
+CLASS="APPLICATION"
+>psql</B
+> prompt, and
+  <B
+CLASS="APPLICATION"
+>psql</B
+> lacks the ability to maintain multiple
+  connections with open transactions to multiple databases.</P
+><P
+>The format of the Slonik <SPAN
+CLASS="QUOTE"
+>"language"</SPAN
+> is very similar to
+  that of SQL, and the parser is based on a similar set of formatting
+  rules for such things as numbers and strings.  Note that slonik is
+  declarative, using literal values throughout, and does
+  <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> have the notion of variables.  It is
+  anticipated that Slonik scripts will typically be
+  <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>generated</I
+></SPAN
+> by scripts, such as Bash or Perl, and
+  these sorts of scripting languages already have perfectly good ways
+  of managing variables, doing iteration, and such.</P
+><P
+>A detailed list of Slonik commands can be found here: <A
+HREF="http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands"
+TARGET="_top"
+>  slonik commands </A
+></P
+></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="AEN487"
+></A
+><H2
+>Exit Status</H2
+><P
+>   <B
+CLASS="APPLICATION"
+>slonik</B
+> returns 0 to the shell if it
+   finished normally.  Scripts may specify return codes.   
+  </P
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="app-slon.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="slonyadmin.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><B
+CLASS="APPLICATION"
+>slon</B
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony-commands.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+></TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file
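
Since slonik is meant to be embedded into shell scripts and reads its
commands from stdin, typical usage looks like the following sketch;
the cluster name and conninfo values are illustrative only:

	#!/bin/sh
	slonik <<EOF
	cluster name = mycluster;
	node 1 admin conninfo = 'dbname=payroll host=db1 user=slony';
	init cluster (id = 1, comment = 'origin node');
	EOF
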
Index: installation.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/installation.html -Ldoc/adminguide/installation.html -u -w -r1.2 -r1.3
--- doc/adminguide/installation.html
+++ doc/adminguide/installation.html
@@ -82,7 +82,11 @@
 >3. Slony-I Installation</A
 ></H1
 ><P
->You should have obtained the Slony-I source from the previous step. Unpack it.</P
+>You should have obtained the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> source from
+the previous step. Unpack it.</P
 ><TABLE
 BORDER="0"
 BGCOLOR="#E0E0E0"
@@ -97,15 +101,18 @@
 ></TR
 ></TABLE
 ><P
-> This will create a directory Slony-I under the current
-directory with the Slony-I sources.  Head into that that directory for
-the rest of the installation procedure.</P
+> This will create a directory under the current directory with
+the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> sources.  Head into that directory for the rest of
+the installation procedure.</P
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN175"
+NAME="AEN196"
 >3.1. Short Version</A
 ></H2
 ><P
@@ -129,21 +136,44 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN179"
+NAME="AEN200"
 >3.2. Configuration</A
 ></H2
 ><P
->The first step of the installation procedure is to configure the source tree
-for your system.  This is done by running the configure script.  Configure
-needs to know where your PostgreSQL source tree is, this is done with the
---with-pgsourcetree= option.</P
+>The first step of the installation procedure is to configure the
+source tree for your system.  This is done by running the
+<B
+CLASS="APPLICATION"
+>configure</B
+> script.  In early versions,
+<B
+CLASS="APPLICATION"
+>Configure</B
+> needed to know where your
+<SPAN
+CLASS="PRODUCTNAME"
+>PostgreSQL</SPAN
+> source tree was; this was done with the
+<CODE
+CLASS="OPTION"
+>--with-pgsourcetree=</CODE
+> option.  As of version 1.1,
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> is configured by pointing it to the various
+library, binary, and include directories; for a full list of these
+options, use the command <TT
+CLASS="COMMAND"
+> ./configure --help </TT
+>.</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN182"
+NAME="AEN209"
 >3.3. Example</A
 ></H2
 ><TABLE
@@ -174,7 +204,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN187"
+NAME="AEN214"
 >3.4. Build</A
 ></H2
 ><P
@@ -194,11 +224,18 @@
 ></TABLE
 ></P
 ><P
-> Be sure to use GNU make; on BSD systems, it is called gmake; on
-Linux, GNU make is typically the native "make", so the name of the
-command you type in may vary somewhat. The build may take anywhere
-from a few seconds to 2 minutes depending on how fast your hardware is
-at compiling things.  The last line displayed should be</P
+> Be sure to use GNU make; on BSD systems, it is called
+<B
+CLASS="APPLICATION"
+>gmake</B
+>; on Linux, GNU make is typically the native
+<B
+CLASS="APPLICATION"
+>make</B
+>, so the name of the command you type in may vary
+somewhat. The build may take anywhere from a few seconds to 2 minutes
+depending on how fast your hardware is at compiling things.  The last
+line displayed should be</P
 ><P
 > <TT
 CLASS="COMMAND"
@@ -211,7 +248,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN194"
+NAME="AEN223"
 >3.5. Installing Slony-I</A
 ></H2
 ><P
@@ -226,10 +263,10 @@
 specified by the <CODE
 CLASS="OPTION"
 >--prefix</CODE
-> option used in the PostgreSQL
-configuration.  Make sure you have appropriate permissions to write
-into that area.  Normally you need to do this either as root or as the
-postgres user.</P
+> option used in the
+PostgreSQL configuration.  Make sure you have appropriate permissions
+to write into that area.  Normally you need to do this either as root
+or as the postgres user.  </P
 ></DIV
 ></DIV
 ><DIV
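
Per the new configuration text above, as of 1.1 you point configure at
the PostgreSQL directories rather than at a source tree.  A sketch of
such an invocation follows; the option names here are an assumption,
so treat the output of ./configure --help as authoritative:

	./configure --with-pgbindir=/usr/local/pgsql/bin \
	            --with-pglibdir=/usr/local/pgsql/lib \
	            --with-pgincludedir=/usr/local/pgsql/include
	gmake all
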
Index: failover.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/failover.sgml
+++ doc/adminguide/failover.sgml
@@ -1,39 +1,45 @@
-<sect1 id="failover"> <title/Doing switchover and failover with Slony-I/
+<sect1 id="failover"> <title>Doing switchover and failover with Slony-I</title>
 
-<sect2><title/Foreword/
+<sect2><title>Foreword</title>
 
-<para> Slony-I is an asynchronous replication system.  Because of
-that, it is almost certain that at the moment the current origin of a
-set fails, the last transactions committed have not propagated to the
-subscribers yet.  They always fail under heavy load, and you know it.
-Thus the goal is to prevent the main server from failing.  The best
-way to do that is frequent maintenance.
-
-<para> Opening the case of a running server is not exactly what we all
-consider professional system maintenance.  And interestingly, those
-users who use replication for backup and failover purposes are usually
-the ones that have a very low tolerance for words like "downtime".  To
-meet these requirements, Slony-I has not only failover capabilities,
-but controlled master role transfer features too.
+<para> <productname>Slony-I</productname> is an asynchronous
+replication system.  Because of that, it is almost certain that at the
+moment the current origin of a set fails, the final transactions
+committed at the origin will have not yet propagated to the
+subscribers.  Systems are particularly likely to fail under heavy
+load; that is one of the corollaries of Murphy's Law.  Therefore the
+principal goal is to <emphasis>prevent</emphasis> the main server from
+failing.  The best way to do that is frequent maintenance.</para>
+
+<para> Opening the case of a running server is not exactly what we
+should consider a <quote>professional</quote> way to do system
+maintenance.  And interestingly, those users who found it valuable to
+use replication for backup and failover purposes are the very ones
+that have the lowest tolerance for terms like <quote>system
+downtime.</quote> To help support these requirements, Slony-I has not
+only failover capabilities, but features for controlled origin
+transfer.</para>
 
 <para> It is assumed in this document that the reader is familiar with
-the slonik utility and knows at least how to set up a simple 2 node
-replication system with Slony-I.
-
-<sect2><title/ Switchover/
-
-<para> We assume a current <quote/origin/ as node1 (AKA master) with
-one <quote/subscriber/ as node2 (AKA slave).  A web application on a
-third server is accessing the database on node1.  Both databases are
-up and running and replication is more or less in sync.
+the <link linkend="slonik"> <application>slonik</application> </link>
+utility and knows at least how to set up a simple 2 node replication
+system with <productname>Slony-I</productname>.</para></sect2>
+
+<sect2><title> Controlled Switchover</title>
+
+<para> We assume node1 as the current <quote>origin</quote> with one
+<quote>subscriber</quote> as node2 (<emphasis>i.e.</emphasis> - the
+slave).  A web application on a third server is accessing the database
+on node1.  Both databases are up and running and replication is more
+or less in sync.
 
 <itemizedlist>
 
 <listitem><para> At the time of this writing switchover to another
 server requires the application to reconnect to the database.  So in
 order to avoid any complications, we simply shut down the web server.
-Users who use <application/pg_pool/ for the applications database
-connections merely have to shut down the pool.
+Users who use <application>pg_pool</application> for the applications database
+connections merely have to shut down the pool.</para></listitem>
 
 <listitem><para> A small slonik script executes the following commands:
 
@@ -42,84 +48,102 @@
 	wait for event (origin = 1, confirmed = 2);
 	move set (id = 1, old origin = 1, new origin = 2);
 	wait for event (origin = 1, confirmed = 2);
-</programlisting>
+</programlisting></para>
 
-<para> After these commands, the origin (master role) of data set 1 is
-now on node2.  It is not simply transferred.  It is done in a fashion
-so that node1 is now a fully synchronized subscriber actively
-replicating the set.  So the two nodes completely switched roles.
-
-<listitem><Para> After reconfiguring the web application (or pgpool)
-to connect to the database on node2 instead, the web server is
-restarted and resumes normal operation.
-
-<para> Done in one shell script, that does the shutdown, slonik, move
-config files and startup all together, this entire procedure takes
-less than 10 seconds.
-
-</itemizedlist>
-
-<para> It is now possible to simply shutdown node1 and do whatever is
-required.  When node1 is restarted later, it will start replicating
-again and eventually catch up after a while.  At this point the whole
-procedure is executed with exchanged node IDs and the original
-configuration is restored.
-
-<sect2><title/ Failover/
-
-<para> Because of the possibility of missing not-yet-replicated
-transactions that are committed, failover is the worst thing that can
-happen in a master-slave replication scenario.  If there is any
-possibility to bring back the failed server even if only for a few
-minutes, we strongly recommend that you follow the switchover
-procedure above.
+<para> After these commands, the origin (master role) of data set 1
+has been transferred to node2.  And it is not simply transferred; it
+is done in a fashion such that node1 becomes a fully synchronized
+subscriber, actively replicating the set.  So the two nodes have
+switched roles completely.</para></listitem>
+
+<listitem><para> After reconfiguring the web application (or
+<application>pgpool</application>) to connect to the database on node2, the web
+server is restarted and resumes normal operation.</para>
+
+<para> Done in one shell script that does the application shutdown,
+<application>slonik</application>, move config files and startup all together, this
+entire procedure is likely to take less than 10 seconds.</para></listitem>
+
+</itemizedlist></para>
+
+<para> You may now simply shut down the server hosting node1 and do
+whatever is required to maintain the server.  When the <application><link linkend="slon"> slon </link></application> for node1 is restarted later,
+it will start replicating again, and soon catch up.  At this point the
+procedure to switch origins is executed again to restore the original
+configuration.</para>
+
+<para> This is the preferred way to handle things; it runs quickly,
+under control of the administrators, and there is no need for there to
+be any loss of data.</para></sect2>
+
+<sect2><title> Failover</title>
+
+<para> If some more serious problem occurs on the <quote>origin</quote>
+server, it may be necessary to failover to a backup server.  This is a
+highly undesirable circumstance, as transactions <quote>committed</quote> on
+the origin, but not applied to the subscribers, will be lost.  You may
+have reported these transactions as <quote>successful</quote> to outside
+users.  As a result, failover should be considered a <emphasis>last
+resort</emphasis>.  If the <quote>injured</quote> origin server can be brought up to
+the point where it can limp along long enough to do a controlled
+switchover, that is <emphasis>greatly</emphasis> preferable.</para>
 
 <para> Slony does not provide any automatic detection for failed
 systems.  Abandoning committed transactions is a business decision
 that cannot be made by a database.  If someone wants to put the
 commands below into a script executed automatically from the network
-monitoring system, well ... its your data.
+monitoring system, well ... it's <emphasis>your</emphasis> data, and it's <emphasis>your</emphasis> failover policy.
 
 <itemizedlist>
-<listitem><para>
-	The slonik command
+
+<listitem><para> The <link linkend="slonik"> <application>slonik</application> </link> command
+
 <programlisting>
 	failover (id = 1, backup node = 2);
-</programlisting>
+</programlisting></para>
 
 <para> causes node2 to assume the ownership (origin) of all sets that
-have node1 as their current origin.  In the case there would be more
-nodes, All direct subscribers of node1 are instructed that this is
-happening.  Slonik would also query all direct subscribers to figure
-out which node has the highest replication status (latest committed
-transaction) for each set, and the configuration would be changed in a
-way that node2 first applies those last minute changes before actually
-allowing write access to the tables.
+have node1 as their current origin.  If there should happen to be
+additional nodes in the <productname>Slony-I</productname> cluster, all direct
+subscribers of node1 are instructed that this is happening.
+<application>Slonik</application> will also query all direct subscribers in order
+to determine which node has the highest replication status
+(<emphasis>i.e.</emphasis> - the latest committed transaction) for each set, and
+the configuration will be changed in a way that node2 first applies
+those final changes before actually allowing write access to the tables.</para>
 
-<para> In addition, all nodes that subscribed directly from node1 will
+<para> In addition, all nodes that subscribed directly to node1 will
 now use node2 as data provider for the set.  This means that after the
 failover command succeeded, no node in the entire replication setup
-will receive anything from node1 any more.  
+will receive anything from node1 any more.</para></listitem>
 
-<listitem><para> Reconfigure and restart the application (or pgpool)
-to cause it to reconnect to node2.
+<listitem><para> Reconfigure and restart the application (or <application>pgpool</application>)
+to cause it to reconnect to node2.</para></listitem>
 
 <listitem><para> After the failover is complete and node2 accepts
 write operations against the tables, remove all remnants of node1's
-configuration information with the slonik command
+configuration information with the <link linkend="slonik"> <application>slonik</application> </link> command
 
 <programlisting>
 	drop node (id = 1, event node = 2);
-</programlisting>
-</itemizedlist>
+</programlisting></para></listitem>
+</itemizedlist></para></sect2>
+
+<sect2><title>After Failover, Reconfiguring node1</title>
 
-<sect2><title/After failover, getting back node1/
+<para> After the above failover, the data stored on node1 is
+considered out of sync with the rest of the nodes, and must be treated
+as corrupt.  Therefore, the only way to get node1 back and transfer
+the origin role back to it is to rebuild it from scratch as a
+subscriber, let it catch up, and then follow the switchover
+procedure.</para>
+
+<para> If the database is very large, it may take many hours to
+recover node1 as a functioning <productname>Slony-I</productname> node; that is
+another reason to consider failover as an undesirable <quote>final
+resort.</quote></para></sect2>
 
-<para> After the above failover, the data stored on node1 must be
-considered out of sync with the rest of the nodes.  Therefore, the
-only way to get node1 back and transfer the master role to it is to
-rebuild it from scratch as a slave, let it catch up and then follow
-the switchover procedure.
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
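
The switchover fragment in the failover.sgml hunks above becomes
runnable once prefixed with the usual slonik preamble.  A minimal
sketch, with illustrative cluster name and conninfo values:

	cluster name = mycluster;
	node 1 admin conninfo = 'dbname=payroll host=db1 user=slony';
	node 2 admin conninfo = 'dbname=payroll host=db2 user=slony';

	lock set (id = 1, origin = 1);
	wait for event (origin = 1, confirmed = 2);
	move set (id = 1, old origin = 1, new origin = 2);
	wait for event (origin = 1, confirmed = 2);
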
Index: installation.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/installation.sgml
+++ doc/adminguide/installation.sgml
@@ -1,15 +1,16 @@
 <sect1 id="installation"> <title> Slony-I Installation</title>
 
-<para>You should have obtained the Slony-I source from the previous step. Unpack it.</para>
+<para>You should have obtained the <productname/Slony-I/ source from
+the previous step. Unpack it.</para>
 
 <screen>
 gunzip slony.tar.gz;
 tar xf slony.tar
 </screen>
 
-<para> This will create a directory Slony-I under the current
-directory with the Slony-I sources.  Head into that that directory for
-the rest of the installation procedure.</para>
+<para> This will create a directory under the current directory with
+the <productname/Slony-I/ sources.  Head into that directory for the rest of
+the installation procedure.</para>
 
 <sect2><title> Short Version</title>
 
@@ -21,10 +22,16 @@
 
 <sect2><title> Configuration</title>
 
-<para>The first step of the installation procedure is to configure the source tree
-for your system.  This is done by running the configure script.  Configure
-needs to know where your PostgreSQL source tree is, this is done with the
---with-pgsourcetree= option.</para></sect2>
+<para>The first step of the installation procedure is to configure the
+source tree for your system.  This is done by running the
+<application/configure/ script.  In early versions,
+<application/Configure/ needed to know where your
+<productname/PostgreSQL/ source tree was; this was done with the
+<option/--with-pgsourcetree=/ option.  As of version 1.1,
+<productname/Slony-I/ is configured by pointing it to the various
+library, binary, and include directories; for a full list of these
+options, use the command <command>./configure --help</command>.
+</para></sect2>
 
 <sect2><title> Example</title>
 
@@ -36,7 +43,8 @@
 various dependent variables and try to detect some quirks of your
 system.  Slony-I is known to need a modified version of libpq on
 specific platforms such as Solaris2.X on SPARC; this patch can be found
-at <ulink url="http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz">
+at <ulink
+url="http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz">
 http://developer.postgresql.org/~wieck/slony1/download/threadsafe-libpq-742.diff.gz</ulink></para></sect2>
 
 
@@ -48,11 +56,12 @@
 gmake all
 </screen></para>
 
-<para> Be sure to use GNU make; on BSD systems, it is called gmake; on
-Linux, GNU make is typically the native "make", so the name of the
-command you type in may vary somewhat. The build may take anywhere
-from a few seconds to 2 minutes depending on how fast your hardware is
-at compiling things.  The last line displayed should be</para>
+<para> Be sure to use GNU make; on BSD systems, it is called
+<application/gmake/; on Linux, GNU make is typically the native
+<application/make/, so the name of the command you type in may vary
+somewhat. The build may take anywhere from a few seconds to 2 minutes
+depending on how fast your hardware is at compiling things.  The last
+line displayed should be</para>
 
 <para> <command> All of Slony-I is successfully made.  Ready to
 install.  </command></para></sect2>
@@ -66,11 +75,10 @@
 </command></para>
 
 <para>This will install files into postgresql install directory as
-specified by the <option>--prefix</option> option used in the PostgreSQL
-configuration.  Make sure you have appropriate permissions to write
-into that area.  Normally you need to do this either as root or as the
-postgres user.
-</para></sect2></sect1>
+specified by the <option>--prefix</option> option used in the
+PostgreSQL configuration.  Make sure you have appropriate permissions
+to write into that area.  Normally you need to do this either as root
+or as the postgres user.  </para></sect2></sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -1,12 +1,15 @@
 <sect1 id="monitoring"> <title/Monitoring/
 
-<para>Here are some of things that you may find in your Slony logs, and explanations of what they mean. 
+<para>Here are some of the things that you may find in your
+<productname/Slony-I/ logs, and explanations of what they mean.
 
 <sect2><title/CONFIG notices/
 
-<para>These entries are pretty straightforward. They are informative messages about your configuration. 
+<para>These entries are pretty straightforward. They are informative
+messages about your configuration.
 
-<para>Here are some typical entries that you will probably run into in your logs:
+<para>Here are some typical entries that you will probably run into in
+your logs:
 
 <screen>
 CONFIG main: local node id = 1
@@ -18,9 +21,11 @@
 CONFIG main: configuration complete - starting threads
 </screen>
 
-<sect2><title/DEBUG Notices/
+<sect2><title>DEBUG Notices</title>
 
-<para>Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads:
+<para>Debug notices are always prefaced by the name of the thread that
+the notice originates from. You will see messages from the following
+threads:
 
 <screen>
 localListenThread: This is the local thread that listens for events on the local node.
@@ -35,6 +40,8 @@
 about how the threads work, what to expect in the logs after you run a
 slonik command...
 
+</sect1>
+
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
Index: slonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/slonik.sgml -Ldoc/adminguide/slonik.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/slonik.sgml
+++ doc/adminguide/slonik.sgml
@@ -1,4 +1,4 @@
-<refentry id="slonik">
+<refentry id="app-slonik">
 <refmeta>
     <refentrytitle id="app-slonik-title"><application>slonik</application></refentrytitle>
     <manvolnum>1</manvolnum>
@@ -6,13 +6,13 @@
   </refmeta>
 
   <refnamediv>
-    <refname><application>slonik</application></refname>
+    <refname><application id="slonik">slonik</application></refname>
     <refpurpose>
       <productname>Slony-I</productname> command processor
     </refpurpose>
   </refnamediv>
 
- <indexterm zone="slonik">
+ <indexterm zone="app-slonik">
   <primary>slonik</primary>
  </indexterm>
 
@@ -35,9 +35,9 @@
 
  <refsect1><title> Outline</title>
 
-  <para>The slonik command line utility is supposed to be used
-  embedded into shell scripts and reads commands from files or
-  stdin.</para>
+  <para>The <application>slonik</application> command line utility is
+  intended to be embedded into shell scripts; it reads commands
+  from files or stdin.</para>
 
   <para>It reads a set of Slonik statements, which are written in a
   scripting language with syntax similar to that of SQL, and performs
@@ -45,13 +45,14 @@
   script.</para>
 
   <para>Nearly all of the real configuration work is actually done by
-  calling stored procedures after loading the Slony-I support base
-  into a database.  Slonik was created because these stored procedures
-  have special requirements as to on which particular node in the
-  replication system they are called.  The absence of named parameters
-  for stored procedures makes it rather hard to do this from the psql
-  prompt, and psql lacks the ability to maintain multiple connections
-  with open transactions to multiple databases.</para>
+  calling stored procedures after loading the <productname/Slony-I/
+  support base into a database.  <application/Slonik/ was created
+  because these stored procedures have special requirements as to on
+  which particular node in the replication system they are called.
+  The absence of named parameters for stored procedures makes it
+  rather hard to do this from the <application/psql/ prompt, and
+  <application/psql/ lacks the ability to maintain multiple
+  connections with open transactions to multiple databases.</para>
 
   <para>The format of the Slonik <quote/language/ is very similar to
   that of SQL, and the parser is based on a similar set of formatting
@@ -61,7 +62,7 @@
   anticipated that Slonik scripts will typically be
   <emphasis>generated</emphasis> by scripts, such as Bash or Perl, and
   these sorts of scripting languages already have perfectly good ways
-  of managing variables.</para>
+  of managing variables, doing iteration, and such.</para>
   
   <para>A detailed list of Slonik commands can be found here: <ulink
   url="http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands">
Index: slonstart.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonstart.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/slonstart.html -Ldoc/adminguide/slonstart.html -u -w -r1.2 -r1.3
--- doc/adminguide/slonstart.html
+++ doc/adminguide/slonstart.html
@@ -81,7 +81,10 @@
 >2. Slon daemons</A
 ></H1
 ><P
->The programs that actually perform Slony-I replication are the
+>The programs that actually perform <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> replication are the
 <B
 CLASS="APPLICATION"
 >slon</B
@@ -89,46 +92,58 @@
 ><P
 >You need to run one <B
 CLASS="APPLICATION"
->slon</B
-> instance for each node in
-a Slony-I cluster, whether you consider that node a <SPAN
+><A
+HREF="app-slon.html#SLON"
+> slon</A
+></B
+> instance for each node in a
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster, whether you consider that node a
+<SPAN
 CLASS="QUOTE"
 >"master"</SPAN
-> or
-a <SPAN
+> or a <SPAN
 CLASS="QUOTE"
 >"slave."</SPAN
 > Since a <TT
 CLASS="COMMAND"
 >MOVE SET</TT
-> or <TT
+> or
+<TT
 CLASS="COMMAND"
 >FAILOVER</TT
-> can
-switch the roles of nodes, slon needs to be able to function for both
-providers and subscribers.  It is not essential that these daemons run
-on any particular host, but there are some principles worth
-considering:
+> can switch the roles of nodes, slon needs to be
+able to function for both providers and subscribers.  It is not
+essential that these daemons run on any particular host, but there are
+some principles worth considering:
 
 <P
 ></P
 ><UL
 ><LI
 ><P
-> Each slon needs to be able to communicate quickly
-with the database whose <SPAN
+> Each <B
+CLASS="APPLICATION"
+>slon</B
+> needs to be able to
+communicate quickly with the database whose <SPAN
 CLASS="QUOTE"
 >"node controller"</SPAN
-> it is.  Therefore, if
-a Slony-I cluster runs across some form of Wide Area Network, each
-slon process should run on or nearby the databases each is
-controlling.  If you break this rule, no particular disaster should
-ensue, but the added latency introduced to monitoring events on the
-slon's <SPAN
+> it
+is.  Therefore, if a <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> cluster runs across some
+form of Wide Area Network, each slon process should run on or nearby
+the databases each is controlling.  If you break this rule, no
+particular disaster should ensue, but the added latency introduced to
+monitoring events on the slon's <SPAN
 CLASS="QUOTE"
 >"own node"</SPAN
-> will cause it to replicate in a
-<SPAN
+> will cause it to
+replicate in a <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
@@ -138,10 +153,13 @@
 ></LI
 ><LI
 ><P
-> The fastest results would be achieved by having each
-slon run on the database server that it is servicing.  If it runs
-somewhere within a fast local network, performance will not be
-noticeably degraded.&#13;</P
+> The very fastest results would be achieved by having
+each <B
+CLASS="APPLICATION"
+>slon</B
+> run on the database server that it is
+servicing.  If it runs somewhere within a fast local network,
+performance will not be noticeably degraded.&#13;</P
 ></LI
 ><LI
 ><P
@@ -151,12 +169,12 @@
 >slon</B
 > processes for a cluster on one machine, as this
 makes it easy to monitor them both in terms of log files and process
-tables from one location.  This eliminates the need to login to
-several hosts in order to look at log files or to restart <B
+tables from one location.  This also eliminates the need to log in to
+several hosts in order to look at log files or to restart
+<B
 CLASS="APPLICATION"
 >slon</B
->
-instances.&#13;</P
+> instances.&#13;</P
 ></LI
 ></UL
 >&#13;</P
@@ -181,8 +199,11 @@
 > version that basically wraps a loop around the
 invocation of <B
 CLASS="APPLICATION"
->slon</B
->, restarting any time it falls over&#13;</P
+><A
+HREF="app-slon.html#SLON"
+> slon</A
+></B
+>, restarting it any time it falls over</P
 ></LI
 ><LI
 ><P
@@ -198,10 +219,12 @@
 recently.  We have had VPN connections that occasionally fall over
 without signalling the application, so that the <B
 CLASS="APPLICATION"
->slon</B
->
-stops working, but doesn't actually die; this polling addresses that
-issue.&#13;</P
+><A
+HREF="app-slon.html#SLON"
+> slon </A
+></B
+> stops working, but doesn't
+actually die; this polling addresses that issue.&#13;</P
 ></LI
 ></UL
 >&#13;</P
@@ -216,29 +239,27 @@
 CLASS="EMPHASIS"
 >usually</I
 ></SPAN
-> the preferable thing to run.  It was at one point
-not preferable to run it whilst subscribing a very large replication
-set where it is expected to take many hours to do the initial
-<TT
+> the preferable thing to run.  It was at
+one point not preferable to run it whilst subscribing a very large
+replication set where it is expected to take many hours to do the
+initial <TT
 CLASS="COMMAND"
 >COPY SET</TT
->.  The problem that came up in that case was that it
-figured that since it hasn't done a <TT
+>.  The problem that came up in that
+case was that it figured that since it hadn't done a
+<TT
 CLASS="COMMAND"
 >SYNC</TT
-> in 2 hours,
-something was broken requiring restarting slon, thereby restarting the
-<TT
+> in 2 hours, something was broken, requiring a
+restart of slon, thereby restarting the <TT
 CLASS="COMMAND"
 >COPY SET</TT
-> event.  More recently, the script has been changed
-to detect <TT
+>
+event.  More recently, the script has been changed to detect
+<TT
 CLASS="COMMAND"
 >COPY SET</TT
-> in progress.
-
-
- </P
+> in progress.</P
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
Index: listenpaths.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/listenpaths.html -Ldoc/adminguide/listenpaths.html -u -w -r1.2 -r1.3
--- doc/adminguide/listenpaths.html
+++ doc/adminguide/listenpaths.html
@@ -81,59 +81,152 @@
 NAME="LISTENPATHS"
 >8. Slony Listen Paths</A
 ></H1
+><DIV
+CLASS="NOTE"
+><P
+></P
+><TABLE
+CLASS="NOTE"
+WIDTH="100%"
+BORDER="0"
+><TR
+><TD
+WIDTH="25"
+ALIGN="CENTER"
+VALIGN="TOP"
+><IMG
+SRC="./images/note.gif"
+HSPACE="5"
+ALT="Note"></TD
+><TD
+ALIGN="LEFT"
+VALIGN="TOP"
+><P
+> If you are running <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> version 1.1, it
+should be <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>completely unnecessary</I
+></SPAN
+> to read this section as it
+introduces a way to automatically manage this part of its
+configuration.  For earlier versions, however, it is needful...</P
+></TD
+></TR
+></TABLE
+></DIV
 ><P
 >If you have more than two or three nodes, and any degree of
-usage of cascaded subscribers (_e.g._ - subscribers that are
+usage of cascaded subscribers (<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>i.e.</I
+></SPAN
+> - subscribers that are
 subscribing through a subscriber node), you will have to be fairly
-careful about the configuration of "listen paths" via the Slonik STORE
-LISTEN and DROP LISTEN statements that control the contents of the
+careful about the configuration of <SPAN
+CLASS="QUOTE"
+>"listen paths"</SPAN
+> via the Slonik <TT
+CLASS="COMMAND"
+>STORE
+LISTEN</TT
+> and <TT
+CLASS="COMMAND"
+>DROP LISTEN</TT
+> statements that control the contents of the
 table sl_listen.&#13;</P
 ><P
->The "listener" entries in this table control where each node
-expects to listen in order to get events propagated from other nodes.
-You might think that nodes only need to listen to the "parent" from
-whom they are getting updates, but in reality, they need to be able to
-receive messages from _all_ nodes in order to be able to conclude that
-SYNCs have been received everywhere, and that, therefore, entries in
-sl_log_1 and sl_log_2 have been applied everywhere, and can therefore
-be purged.&#13;</P
+>The <SPAN
+CLASS="QUOTE"
+>"listener"</SPAN
+> entries in this table control where each
+node expects to listen in order to get events propagated from other
+nodes.  You might think that nodes only need to listen to the
+<SPAN
+CLASS="QUOTE"
+>"parent"</SPAN
+> from whom they are getting updates, but in reality,
+they need to be able to receive messages from <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>all</I
+></SPAN
+> nodes in
+order to be able to conclude that SYNCs have been received everywhere,
+and that, therefore, entries in sl_log_1 and sl_log_2 have been
+applied everywhere, and can therefore be purged.  This extra
+communication is needful so <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> is able to shift
+origins to other locations.&#13;</P
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN744"
+NAME="AEN898"
 >8.1. How Listening Can Break</A
 ></H2
 ><P
 >On one occasion, I had a need to drop a subscriber node (#2) and
 recreate it.  That node was the data provider for another subscriber
-(#3) that was, in effect, a "cascaded slave."  Dropping the subscriber
-node initially didn't work, as slonik informed me that there was a
-dependant node.  I repointed the dependant node to the "master" node
-for the subscription set, which, for a while, replicated without
-difficulties.&#13;</P
+(#3) that was, in effect, a <SPAN
+CLASS="QUOTE"
+>"cascaded slave."</SPAN
+> Dropping the
+subscriber node initially didn't work, as <A
+HREF="app-slonik.html#SLONIK"
+><TT
+CLASS="COMMAND"
+>slonik</TT
+> </A
+> informed me that there was a dependant node.
+I repointed the dependant node to the <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> node for the
+subscription set, which, for a while, replicated without difficulties.&#13;</P
 ><P
->I then dropped the subscription on "node 2," and started
-resubscribing it.  That raised the Slony-I <TT
+>I then dropped the subscription on <SPAN
+CLASS="QUOTE"
+>"node 2,"</SPAN
+> and started
+resubscribing it.  That raised the <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>
+<TT
 CLASS="COMMAND"
 >SET_SUBSCRIPTION</TT
->
-event, which started copying tables.  At that point in time, events
-stopped propagating to "node 3," and while it was in perfectly OK
-shape, no events were making it to it.&#13;</P
+> event, which started copying tables.  At
+that point in time, events stopped propagating to <SPAN
+CLASS="QUOTE"
+>"node 3,"</SPAN
+> and
+while it was in perfectly OK shape, no events were reaching it.&#13;</P
 ><P
 >The problem was that node #3 was expecting to receive events
 from node #2, which was busy processing the <TT
 CLASS="COMMAND"
 >SET_SUBSCRIPTION</TT
-> event,
-and was not passing anything else on.&#13;</P
+>
+event, and was not passing anything else on.&#13;</P
 ><P
 >We dropped the listener rules that caused node #3 to listen to
 node 2, replacing them with rules where it expected its events to come
-from node #1 (the "master" provider node for the replication set).  At
-that moment, "as if by magic," node #3 started replicating again, as
+from node #1 (the origin node for the replication set).  At that
+moment, <SPAN
+CLASS="QUOTE"
+>"as if by magic,"</SPAN
+> node #3 started replicating again, as
 it discovered a place to get <TT
 CLASS="COMMAND"
 >SYNC</TT
@@ -144,19 +237,22 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN753"
+NAME="AEN915"
 >8.2. How The Listen Configuration Should Look</A
 ></H2
 ><P
->The simple cases tend to be simple to cope with.  We'll look at
-a fairly complex set of nodes.&#13;</P
+>The simple cases tend to be simple to cope with.  We need to
+instead look at a more complex node configuration.&#13;</P
 ><P
->Consider a set of nodes, 1 thru 6, where 1 is the "master,"
-where 2-4 subscribe directly to the master, and where 5 subscribes to
+>Consider a set of nodes, 1 thru 6, where 1 is the origin, 
+where 2-4 subscribe directly to the origin, and where 5 subscribes to
 2, and 6 subscribes to 5.&#13;</P
 ><P
->Here is a "listener network" that indicates where each node
-should listen for messages coming from each other node:
+>Here is a <SPAN
+CLASS="QUOTE"
+>"listener network"</SPAN
+> that indicates where each
+node should listen for messages coming from each other node:
 
 <TABLE
 BORDER="0"
@@ -240,8 +336,17 @@
 ><P
 >How we read these listen statements is thus...&#13;</P
 ><P
->When on the "receiver" node, look to the "provider" node to
-provide events coming from the "origin" node.&#13;</P
+>When on the <SPAN
+CLASS="QUOTE"
+>"receiver"</SPAN
+> node, look to the <SPAN
+CLASS="QUOTE"
+>"provider"</SPAN
+>
+node to provide events coming from the <SPAN
+CLASS="QUOTE"
+>"origin"</SPAN
+> node.&#13;</P
 ><P
 >The tool <TT
 CLASS="FILENAME"
@@ -251,9 +356,19 @@
 >altperl</TT
 >
 scripts produces optimized listener networks in both the tabular form
-shown above as well as in the form of Slonik statements.&#13;</P
+shown above as well as in the form of <A
+HREF="app-slonik.html#SLONIK"
+><B
+CLASS="APPLICATION"
+>slonik</B
+> </A
+> statements.&#13;</P
 ><P
->There are three "thorns" in this set of roses:
+>There are three <SPAN
+CLASS="QUOTE"
+>"thorns"</SPAN
+> in this set of roses:
+
 <P
 ></P
 ><UL
@@ -262,8 +377,14 @@
 > If you change the shape of the node set, so that the
 nodes subscribe differently to things, you need to drop sl_listen
 entries and create new ones to indicate the new preferred paths
-between nodes.  There is no automated way at this point to do this
-"reshaping."&#13;</P
+between nodes.  Until <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> version 1.1, there is no automated
+way to do this <SPAN
+CLASS="QUOTE"
+>"reshaping."</SPAN
+>&#13;</P
 ></LI
 ><LI
 ><P
@@ -276,7 +397,10 @@
 > change the sl_listen entries,
 events will likely continue to propagate so long as all of the nodes
 continue to run well.  The problem will only be noticed when a node is
-taken down, "orphaning" any nodes that are listening through it.&#13;</P
+taken down, <SPAN
+CLASS="QUOTE"
+>"orphaning"</SPAN
+> any nodes that are listening through it.&#13;</P
 ></LI
 ><LI
 ><P
@@ -287,8 +411,12 @@
 CLASS="EMPHASIS"
 >different</I
 ></SPAN
-> shapes for their respective trees of subscribers.  There
-won't be a single "best" listener configuration in that case.&#13;</P
+> shapes for their respective trees of subscribers.
+There won't be a single <SPAN
+CLASS="QUOTE"
+>"best"</SPAN
+> listener configuration in that
+case.&#13;</P
 ></LI
 ><LI
 ><P
@@ -301,10 +429,16 @@
 ></SPAN
 > be a series of sl_path entries connecting the origin
 to the receiver.  This means that if the contents of sl_path do not
-express a "connected" network of nodes, then some nodes will not be
-reachable.  This would typically happen, in practice, when you have
+express a <SPAN
+CLASS="QUOTE"
+>"connected"</SPAN
+> network of nodes, then some nodes will not
+be reachable.  This would typically happen, in practice, when you have
 two sets of nodes, one in one subnet, and another in another subnet,
-where there are only a couple of "firewall" nodes that can talk
+where there are only a couple of <SPAN
+CLASS="QUOTE"
+>"firewall"</SPAN
+> nodes that can talk
 between the subnets.  Cut out those nodes and the subnets stop
 communicating.&#13;</P
 ></LI
@@ -316,62 +450,16 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN783"
->8.3. Open Question</A
-></H2
-><P
->I am not certain what happens if you have multiple listen path
-entries for one path, that is, if you set up entries allowing a node
-to listen to multiple receivers to get events from a particular
-origin.  Further commentary on that would be appreciated!
-
-<DIV
-CLASS="NOTE"
-><P
-></P
-><TABLE
-CLASS="NOTE"
-WIDTH="100%"
-BORDER="0"
-><TR
-><TD
-WIDTH="25"
-ALIGN="CENTER"
-VALIGN="TOP"
-><IMG
-SRC="./images/note.gif"
-HSPACE="5"
-ALT="Note"></TD
-><TD
-ALIGN="LEFT"
-VALIGN="TOP"
-><P
-> Actually, I do have answers to this; the remainder of
-this document should be re-presented based on the fact that Slony-I
-1.1 will include a "heuristic" to generate the listener paths
-automatically. </P
-></TD
-></TR
-></TABLE
-></DIV
->&#13;</P
-></DIV
-><DIV
-CLASS="SECT2"
-><H2
-CLASS="SECT2"
-><A
-NAME="AEN788"
->8.4. Generating listener entries via heuristics</A
+NAME="AEN958"
+>8.3. Automated Listen Path Generation</A
 ></H2
 ><P
->It ought to be possible to generate sl_listen entries
-dynamically, based on the following heuristics.  Hopefully this will
-take place in version 1.1, eliminating the need to configure this by
-hand.&#13;</P
-><P
->Configuration will (tentatively) be controlled based on two data
-sources:
+> In <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> version 1.1, a heuristic scheme is
+introduced to automatically generate listener entries.  This happens,
+in order, based on three data sources:
 
 <P
 ></P
@@ -379,32 +467,41 @@
 ><LI
 ><P
 > sl_subscribe entries are the first, most vital
-control as to what listens to what; we know there must be a "listen"
-entry for a subscriber node to listen to its provider for events from
-the provider, and there should be direct "listening" taking place
-between subscriber and provider.&#13;</P
+control as to what listens to what; we <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>know</I
+></SPAN
+> there must be a
+direct path between each subscriber node and its provider.&#13;</P
 ></LI
 ><LI
 ><P
 > sl_path entries are the second indicator; if
-sl_subscribe has not already indicated "how to listen," then a node
-may listen directly to the event's origin if there is a suitable
-sl_path entry&#13;</P
+sl_subscribe has not already indicated <SPAN
+CLASS="QUOTE"
+>"how to listen,"</SPAN
+> then a
+node may listen directly to the event's origin if there is a suitable
+sl_path entry.&#13;</P
 ></LI
 ><LI
 ><P
-> If there is no guidance thus far based on the above
-data sources, then nodes can listen indirectly if there is an sl_path
-entry that points to a suitable sl_listen entry...&#13;</P
+> Lastly, if there has been no guidance thus far based
+on the above data sources, then nodes can listen indirectly via every
+node that is either a provider for the receiver, or that is using the
+receiver as a provider.&#13;</P
 ></LI
 ></UL
 >&#13;</P
 ><P
-> A stored procedure would run on each node, rewriting sl_listen
-each time sl_subscribe or sl_path are modified.
-
-
- </P
+> Any time sl_subscribe or sl_path are modified,
+<CODE
+CLASS="FUNCTION"
+>RebuildListenEntries()</CODE
+> will be called to revise
+the listener paths.</P
 ></DIV
 ></DIV
 ><DIV
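
For the listener network described above (node 1 the origin, node 5
subscribing via node 2, and node 6 via node 5), the tabular entries
translate into slonik statements along these lines; a sketch showing
just two of the entries:

	store listen (origin = 1, receiver = 5, provider = 2);
	store listen (origin = 1, receiver = 6, provider = 5);
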
Index: dropthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/dropthings.sgml
+++ doc/adminguide/dropthings.sgml
@@ -1,62 +1,80 @@
 <sect1 id="dropthings"> <title/ Dropping things from Slony Replication/
 
-<para>There are several things you might want to do involving dropping things from Slony-I replication.
+<para>There are several things you might want to do involving dropping
+things from <productname/Slony-I/ replication.
 
 <sect2><title/ Dropping A Whole Node/
 
-<para>If you wish to drop an entire node from replication, the Slonik command DROP NODE should do the trick.  
-
-<para>This will lead to Slony-I dropping the triggers (generally that deny the ability to update data), restoring the "native" triggers, dropping the schema used by Slony-I, and the slon process for that node terminating itself.
-
-<para>As a result, the database should be available for whatever use your application makes of the database.
-
-<para>This is a pretty major operation, with considerable potential to cause substantial destruction; make sure you drop the right node!
-
-<para>The operation will fail if there are any nodes subscribing to the node that you attempt to drop, so there is a bit of failsafe.
-
-<para>SlonyFAQ17 documents some extra maintenance that may need to be done on sl_confirm if you are running versions prior to 1.0.5.
+<para>If you wish to drop an entire node from replication, the <link
+linkend="slonik"> slonik </link> command <command/DROP NODE/ should do
+the trick.
+
+<para>This will lead to <productname/Slony-I/ dropping the triggers
+(generally ones that deny the ability to update data), restoring the
+<quote/native/ triggers, dropping the schema used by <productname/Slony-I/,
+and the slon process for that node terminating itself.
+
+<para>As a result, the database should be available for whatever use
+your application makes of the database.
+
+<para>This is a pretty major operation, with considerable potential to
+cause substantial destruction; make sure you drop the right node!
+
+<para>The operation will fail if there are any nodes subscribing to
+the node that you attempt to drop, so there is a bit of a failsafe to
+protect you from errors.
+
+<para><link linkend="FAQ17"> sl_log_1 isn't getting purged </link>
+documents some extra maintenance that may need to be done on
+sl_confirm if you are running versions prior to 1.0.5.
 
 <sect2><title/ Dropping An Entire Set/
 
 <para>If you wish to stop replicating a particular replication set,
-the Slonik command <command/DROP SET/ is what you need to use.
+the <link linkend="slonik"> slonik </link> command <command/DROP SET/
+is what you need to use.
 
-<para>Much as with <command/DROP NODE/, this leads to Slony-I dropping
-the Slony-I triggers on the tables and restoring <quote/native/
-triggers.  One difference is that this takes place on <emphasis/all/
-nodes in the cluster, rather than on just one node.  Another
-difference is that this does not clear out the Slony-I cluster's
-namespace, as there might be other sets being serviced.
+<para>Much as with <command/DROP NODE/, this leads to
+<productname/Slony-I/ dropping the <productname/Slony-I/ triggers on
+the tables and restoring <quote/native/ triggers.  One difference is
+that this takes place on <emphasis/all/ nodes in the cluster, rather
+than on just one node.  Another difference is that this does not clear
+out the <productname/Slony-I/ cluster's namespace, as there might be
+other sets being serviced.
 
 <para>This operation is quite a bit more dangerous than <command/DROP
-NODE/, as there <emphasis/isn't/ the same sort of "failsafe."  If you
-tell <command/DROP SET/ to drop the <emphasis/wrong/ set, there isn't
-anything to prevent "unfortunate results."
+NODE/, as there <emphasis/isn't/ the same sort of <quote/failsafe./ If
+you tell <command/DROP SET/ to drop the <emphasis/wrong/ set, there
+isn't anything to prevent potentially career-limiting
+<quote/unfortunate results./ Handle with care...
 
 <sect2><title/ Unsubscribing One Node From One Set/
 
 <para>The <command/UNSUBSCRIBE SET/ operation is a little less
 invasive than either <command/DROP SET/ or <command/DROP NODE/; it
-involves dropping Slony-I triggers and restoring "native" triggers on
-one node, for one replication set.
+involves dropping <productname/Slony-I/ triggers and restoring
+"native" triggers on one node, for one replication set.
 
-<para>Much like with <command/DROP NODE/, this operation will fail if there is a node subscribing to the set on this node. 
+<para>Much like with <command/DROP NODE/, this operation will fail if
+there is a node subscribing to the set on this node.
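+
+<para>As a minimal sketch (the set and node numbers here are
+assumptions for illustration), unsubscribing node 3 from set 1 might
+look like:
+
+<programlisting>
+unsubscribe set (id = 1, receiver = 3);
+</programlisting>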
 
 <warning>
 <para>For all of the above operations, <quote/turning replication back
 on/ will require that the node copy in a <emphasis/full/ fresh set of
 the data on a provider.  The fact that the data was recently being
-replicated isn't good enough; Slony-I will expect to refresh the data
-from scratch.
+replicated isn't good enough; <productname/Slony-I/ will expect to
+refresh the data from scratch.
 </warning>
 
 <sect2><title/ Dropping A Table From A Set/
 
-<para>In Slony 1.0.5 and above, there is a Slonik command <command/SET
-DROP TABLE/ that allows dropping a single table from replication
-without forcing the user to drop the entire replication set.
+<para>In <productname/Slony-I/ 1.0.5 and above, there is a Slonik
+command <command/SET DROP TABLE/ that allows dropping a single table
+from replication without forcing the user to drop the entire
+replication set.
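+
+<para>For instance (the origin and table ID here are assumptions for
+illustration), dropping the table with ID 40 from replication might
+look like:
+
+<programlisting>
+set drop table (origin = 1, id = 40);
+</programlisting>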
 
-<para>If you are running an earlier version, there is a <quote/hack/ to do this:
+<para>If you are running an earlier version, there is a <quote/hack/
+to do this:
 
 <para>You can fiddle this by hand by finding, in sl_table, the table
 ID for the table you want to get rid of, and then
@@ -68,16 +86,18 @@
   delete from _slonyschema.sl_table where tab_id = 40;
 </programlisting>
 
-<para>The schema will obviously depend on how you defined the Slony-I
-cluster.  The table ID, in this case, 40, will need to change to the
-ID of the table you want to have go away.
+<para>The schema will obviously depend on how you defined the
+<productname/Slony-I/ cluster.  The table ID, in this case, 40, will
+need to change to the ID of the table you want to remove.
 
 <para>You'll have to run these three queries on all of the nodes,
-preferably firstly on the "master" node, so that the dropping of this
-propagates properly.  Implementing this via a Slonik statement with a
-new Slony event would do that.  Submitting the three queries using
-EXECUTE SCRIPT could do that.  Also possible would be to connect to
-each database and submit the queries by hand.
+preferably first on the origin node, so that the removal propagates
+properly.  Implementing this via a <link linkend="slonik"> slonik
+</link> statement with a new <productname/Slony-I/ event would do
+that, as would submitting the three queries using
+<command/EXECUTE SCRIPT/; see <link linkend="ddlchanges"> Database
+Schema Changes </link> for more details.  Also possible would be to
+connect to each database and submit the queries by hand.
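+
+<para>For instance, if the three queries were saved in a file
+<filename>/tmp/drop_table_40.sql</filename> (the filename, set ID,
+and event node here are assumptions for illustration), they might be
+submitted to all nodes thus:
+
+<programlisting>
+execute script (set id = 1, filename = '/tmp/drop_table_40.sql',
+                event node = 1);
+</programlisting>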
 
 <sect2><title/ Dropping A Sequence From A Set/
 
@@ -87,9 +107,9 @@
 <para>If you are running an earlier version, here are instructions as
 to how to drop sequences:
 
-<para>The data that needs to be deleted to stop Slony from continuing
-to replicate the two sequences identified with Sequence IDs 93 and 59
-are thus:
+<para>The data that needs to be deleted to stop <productname/Slony-I/
+from continuing to replicate the two sequences identified with
+Sequence IDs 93 and 59 is as follows:
 
 <programlisting>
 delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
@@ -97,9 +117,10 @@
 </programlisting>
 
 <para> Those two queries could be submitted to all of the nodes via
-<function/ddlscript()/ / <command/EXECUTE SCRIPT/, thus eliminating
-the sequence everywhere "at once."  Or they may be applied by hand to
-each of the nodes.
+<function>ddlscript()</function> / <command>EXECUTE SCRIPT</command>,
+thus eliminating the sequences everywhere <quote>at once.</quote> Or
+they may be applied by hand to each of the nodes.</para>
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
--- /dev/null
+++ doc/adminguide/definingsets.html
@@ -0,0 +1,358 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<HTML
+><HEAD
+><TITLE
+>Defining Slony-I Replication
+Sets</TITLE
+><META
+NAME="GENERATOR"
+CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
+REV="MADE"
+HREF="mailto:cbbrowne at gmail.com"><LINK
+REL="HOME"
+TITLE="Slony-I 1.1 Administration"
+HREF="slony.html"><LINK
+REL="UP"
+HREF="slonyintro.html"><LINK
+REL="PREVIOUS"
+TITLE="Defining Slony-I Clusters"
+HREF="cluster.html"><LINK
+REL="NEXT"
+TITLE="Slony-I Commands"
+HREF="slony-commands.html"><LINK
+REL="STYLESHEET"
+TYPE="text/css"
+HREF="stdstyle.css"><META
+HTTP-EQUIV="Content-Type"></HEAD
+><BODY
+CLASS="SECT1"
+BGCOLOR="#FFFFFF"
+TEXT="#000000"
+LINK="#0000FF"
+VLINK="#840084"
+ALINK="#0000FF"
+><DIV
+CLASS="NAVHEADER"
+><TABLE
+SUMMARY="Header navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TH
+COLSPAN="3"
+ALIGN="center"
+>Slony-I 1.1 Administration</TH
+></TR
+><TR
+><TD
+WIDTH="10%"
+ALIGN="left"
+VALIGN="bottom"
+><A
+HREF="cluster.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="80%"
+ALIGN="center"
+VALIGN="bottom"
+></TD
+><TD
+WIDTH="10%"
+ALIGN="right"
+VALIGN="bottom"
+><A
+HREF="slony-commands.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+></TABLE
+><HR
+ALIGN="LEFT"
+WIDTH="100%"></DIV
+><DIV
+CLASS="SECT1"
+><H1
+CLASS="SECT1"
+><A
+NAME="DEFININGSETS"
+>6. Defining Slony-I Replication
+Sets</A
+></H1
+><P
+>Defining the nodes indicated the shape of the cluster of
+database servers; it is now time to determine what data is to be
+copied between them.  The groups of data that are copied are defined
+as <SPAN
+CLASS="QUOTE"
+>"sets."</SPAN
+>&#13;</P
+><P
+>A replication set consists of the following:
+<P
+></P
+><UL
+><LI
+><P
+> Keys for tables that are to be replicated but have no
+suitable primary key</P
+></LI
+><LI
+><P
+> Tables that are to be replicated</P
+></LI
+><LI
+><P
+> Sequences that are to be replicated</P
+></LI
+></UL
+>&#13;</P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN297"
+>6.1. Primary Keys</A
+></H2
+><P
+><SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>needs</I
+></SPAN
+> to have a
+primary key or candidate thereof on each table that is replicated.  PK
+values are used as the primary identifier for each tuple that is
+modified in the source system.  There are three ways that you can get
+<SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> to use a primary key:
+
+<P
+></P
+><UL
+><LI
+><P
+> If the table has a formally identified primary key,
+<TT
+CLASS="COMMAND"
+>SET ADD TABLE</TT
+> can be used without any need to
+reference the primary key.  <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> will pick up that
+there is a primary key, and use it.&#13;</P
+></LI
+><LI
+><P
+> If the table hasn't got a primary key, but has some
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>candidate</I
+></SPAN
+> primary key, that is, some index on a
+combination of fields that is UNIQUE and NOT NULL, then you can
+specify the key, as in
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    SET ADD TABLE (set id = 1, origin = 1, id = 42, 
+                   fully qualified name = 'public.this_table', 
+                   key = 'this_by_that', 
+         comment='this_table has this_by_that as a candidate primary key');</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
+><P
+> Notice that while you need to specify the namespace for the
+table, you must <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> specify the namespace for the
+key, as <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> infers the namespace from the table.&#13;</P
+></LI
+><LI
+><P
+> If the table hasn't even got a candidate primary key,
+you can ask <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> to provide one.  This is done by
+first using <TT
+CLASS="COMMAND"
+>TABLE ADD KEY</TT
+> to add a column populated
+using a <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> sequence, and then having the
+<TT
+CLASS="COMMAND"
+>SET ADD TABLE</TT
+> include the directive
+<CODE
+CLASS="OPTION"
+>key=serial</CODE
+>, to indicate that <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+>'s
+own column should be used, as sketched just below.</P
+></LI
+></UL
+>&#13;</P
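+><P
+> As a minimal sketch (the table name, ID numbers, and comment here
+are assumptions for illustration), the two steps for a keyless table
+might look like:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    TABLE ADD KEY (node id = 1,
+                   fully qualified name = 'public.history');
+    SET ADD TABLE (set id = 1, origin = 1, id = 43,
+                   fully qualified name = 'public.history',
+                   key = serial,
+                   comment = 'history table, keyed by Slony-I column');</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P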
+><P
+> It is not terribly important whether you pick a
+<SPAN
+CLASS="QUOTE"
+>"true"</SPAN
+> primary key or a mere <SPAN
+CLASS="QUOTE"
+>"candidate primary
+key;"</SPAN
+> it is, however, recommended that you have one of those
+instead of having <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> populate the PK column for
+you.  If you don't have a suitable primary key, that means the table
+lacks any mechanism, from your application's standpoint, for keeping
+values unique.  <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> may therefore introduce
+a new failure mode for your application; the lack of a key also
+implies that your application could already enter ambiguous data into
+the database.</P
+></DIV
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN327"
+>6.2. Grouping tables into sets</A
+></H2
+><P
+> It will be vital to group tables together into a single set if
+those tables are related via foreign key constraints.  If tables that
+are thus related are <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> replicated together,
+you'll find yourself in trouble if you switch the <SPAN
+CLASS="QUOTE"
+>"master
+provider"</SPAN
+> from one node to another, and discover that the new
+<SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> can't be updated properly because it is missing
+the contents of dependent tables.</P
+><P
+> If a database schema has been designed cleanly, it is likely
+that replication sets will be virtually synonymous with namespaces.
+All of the tables and sequences in a particular namespace will be
+sufficiently related that you will want to replicate them all.
+Conversely, tables found in different schemas will likely
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>not</I
+></SPAN
+> be related, and therefore should be replicated in
+separate sets.</P
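+><P
+> As an illustrative sketch (the set, node, and table names and IDs
+here are assumptions), two tables related by a foreign key would be
+grouped into one set thus:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    CREATE SET (id = 1, origin = 1, comment = 'orders and order lines');
+    SET ADD TABLE (set id = 1, origin = 1, id = 1,
+                   fully qualified name = 'public.orders');
+    SET ADD TABLE (set id = 1, origin = 1, id = 2,
+                   fully qualified name = 'public.order_lines');</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P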
+></DIV
+></DIV
+><DIV
+CLASS="NAVFOOTER"
+><HR
+ALIGN="LEFT"
+WIDTH="100%"><TABLE
+SUMMARY="Footer navigation table"
+WIDTH="100%"
+BORDER="0"
+CELLPADDING="0"
+CELLSPACING="0"
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+><A
+HREF="cluster.html"
+ACCESSKEY="P"
+>Prev</A
+></TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slony.html"
+ACCESSKEY="H"
+>Home</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+><A
+HREF="slony-commands.html"
+ACCESSKEY="N"
+>Next</A
+></TD
+></TR
+><TR
+><TD
+WIDTH="33%"
+ALIGN="left"
+VALIGN="top"
+>Defining Slony-I Clusters</TD
+><TD
+WIDTH="34%"
+ALIGN="center"
+VALIGN="top"
+><A
+HREF="slonyintro.html"
+ACCESSKEY="U"
+>Up</A
+></TD
+><TD
+WIDTH="33%"
+ALIGN="right"
+VALIGN="top"
+>Slony-I Commands</TD
+></TR
+></TABLE
+></DIV
+></BODY
+></HTML
+>
\ No newline at end of file

