CVS User Account cvsuser
Mon Dec 13 23:37:02 PST 2004
Log Message:
-----------
Plenty of updates to documentation; consolidated into man pages for slon/slonik

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.html (r1.1 -> r1.2)
        cluster.html (r1.1 -> r1.2)
        concepts.html (r1.1 -> r1.2)
        ddlchanges.html (r1.1 -> r1.2)
        defineset.sgml (r1.3 -> r1.4)
        dropthings.html (r1.1 -> r1.2)
        failover.html (r1.1 -> r1.2)
        faq.html (r1.1 -> r1.2)
        faq.sgml (r1.1 -> r1.2)
        filelist.sgml (r1.2 -> r1.3)
        firstdb.html (r1.1 -> r1.2)
        firstdb.sgml (r1.3 -> r1.4)
        help.html (r1.1 -> r1.2)
        help.sgml (r1.3 -> r1.4)
        installation.html (r1.1 -> r1.2)
        listenpaths.html (r1.1 -> r1.2)
        maintenance.html (r1.1 -> r1.2)
        monitoring.html (r1.1 -> r1.2)
        requirements.html (r1.1 -> r1.2)
        reshape.html (r1.1 -> r1.2)
        slonik.html (r1.1 -> r1.2)
        slonik.sgml (r1.2 -> r1.3)
        slonstart.html (r1.1 -> r1.2)
        slony.html (r1.1 -> r1.2)
        slony.sgml (r1.2 -> r1.3)
        subscribenodes.html (r1.1 -> r1.2)

Added Files:
-----------
    slony1-engine/doc/adminguide:
        bookindex.sgml (r1.1)
        reference.sgml (r1.1)
        slon.sgml (r1.1)
        slonik_ref.sgml (r1.1)

Removed Files:
-------------
    slony1-engine/doc/adminguide:
        slonconfig.sgml

-------------- next part --------------
Index: reshape.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/reshape.html -Ldoc/adminguide/reshape.html -u -w -r1.1 -r1.2
--- doc/adminguide/reshape.html
+++ doc/adminguide/reshape.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE="Slony-I Maintenance"
 HREF="maintenance.html"><LINK
@@ -79,10 +79,11 @@
 CLASS="SECT1"
 ><A
 NAME="RESHAPE"
->14. Reshaping a Cluster</A
+>6. Reshaping a Cluster</A
 ></H1
 ><P
->If you rearrange the nodes so that they serve different purposes, this will likely lead to the subscribers changing a bit.&#13;</P
+>If you rearrange the nodes so that they serve different
+purposes, this will likely lead to the subscribers changing a bit.&#13;</P
 ><P
 >This will require doing several things:
 <P
@@ -90,20 +91,39 @@
 ><UL
 ><LI
 ><P
-> If you want a node that is a subscriber to become the "master" provider for a particular replication set, you will have to issue the slonik MOVE SET operation to change that "master" provider node.&#13;</P
+> If you want a node that is a subscriber to become the
+"master" provider for a particular replication set, you will have to
+issue the slonik MOVE SET operation to change that "master" provider
+node.&#13;</P
 ></LI
 ><LI
 ><P
-> You may subsequently, or instead, wish to modify the subscriptions of other nodes.  You might want to modify a node to get its data from a different provider, or to change it to turn forwarding on or off.  This can be accomplished by issuing the slonik SUBSCRIBE SET operation with the new subscription information for the node; Slony-I will change the configuration.&#13;</P
+> You may subsequently, or instead, wish to modify the
+subscriptions of other nodes.  You might want to modify a node to get
+its data from a different provider, or to change it to turn forwarding
+on or off.  This can be accomplished by issuing the slonik SUBSCRIBE
+SET operation with the new subscription information for the node;
+Slony-I will change the configuration.&#13;</P
 ></LI
 ><LI
 ><P
-> If the directions of data flows have changed, it is doubtless appropriate to issue a set of DROP LISTEN operations to drop out obsolete paths between nodes and SET LISTEN to add the new ones.  At present, this is not changed automatically; at some point, MOVE SET and SUBSCRIBE SET might change the paths as a side-effect.  See SlonyListenPaths for more information about this.  In version 1.1 and later, it is likely that the generation of sl_listen entries will be entirely automated, where they will be regenerated when changes are made to sl_path or sl_subscribe, thereby making it unnecessary to even think about SET LISTEN.&#13;</P
+> If the directions of data flows have changed, it is
+doubtless appropriate to issue a set of DROP LISTEN operations to drop
+out obsolete paths between nodes and SET LISTEN to add the new ones.
+At present, this is not changed automatically; at some point, MOVE SET
+and SUBSCRIBE SET might change the paths as a side-effect.  See
+SlonyListenPaths for more information about this.  In version 1.1 and
+later, it is likely that the generation of sl_listen entries will be
+entirely automated, where they will be regenerated when changes are
+made to sl_path or sl_subscribe, thereby making it unnecessary to even
+think about SET LISTEN.&#13;</P
 ></LI
 ></UL
-></P
+>&#13;</P
 ><P
-> The "altperl" toolset includes a "init_cluster.pl" script that is quite up to the task of creating the new SET LISTEN commands; it isn't smart enough to know what listener paths should be dropped.
+> The "altperl" toolset includes an "init_cluster.pl" script that
+is quite up to the task of creating the new SET LISTEN commands; it
+isn't smart enough to know what listener paths should be dropped.
 
  </P
 ></DIV
@@ -157,7 +177,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
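The MOVE SET and SUBSCRIBE SET steps described in reshape.html above can be sketched as a slonik script. This is only an illustration; the cluster name, node numbers, set id, and conninfo strings are hypothetical:

```
cluster name = 'testcluster';
node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
node 2 admin conninfo = 'dbname=testdb host=server2 user=slony';
node 3 admin conninfo = 'dbname=testdb host=server3 user=slony';

# Make node 2 the new "master" provider (origin) of set 1.
# LOCK SET is required on the old origin before MOVE SET.
lock set ( id = 1, origin = 1 );
move set ( id = 1, old origin = 1, new origin = 2 );

# Repoint another subscriber at the new provider, forwarding on.
subscribe set ( id = 1, provider = 2, receiver = 3, forward = yes );
```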
Index: requirements.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/requirements.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/requirements.html -Ldoc/adminguide/requirements.html -u -w -r1.1 -r1.2
--- doc/adminguide/requirements.html
+++ doc/adminguide/requirements.html
@@ -12,9 +12,9 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyintro.html"><LINK
 REL="PREVIOUS"
-HREF="t24.html"><LINK
+HREF="slonyintro.html"><LINK
 REL="NEXT"
 TITLE=" Slony-I Installation"
 HREF="installation.html"><LINK
@@ -49,7 +49,7 @@
 ALIGN="left"
 VALIGN="bottom"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -431,7 +431,7 @@
 ALIGN="left"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -465,7 +465,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -1,9 +1,10 @@
-<sect1> <title>Defining Slony-I Replication Sets</title>
+<sect1 id="definingsets"> <title>Defining Slony-I Replication
+Sets</title>
 
 <para>Defining the nodes indicated the shape of the cluster of
 database servers; it is now time to determine what data is to be
 copied between them.  The groups of data that are copied are defined
-as "sets."
+as <quote>sets.</quote>
 
 <para>A replication set consists of the following:
 <itemizedlist>
@@ -27,8 +28,8 @@
 <itemizedlist>
 
 <listitem><para> If the table has a formally identified primary key,
-<command/SET ADD TABLE/ can be used without any need to reference the
-primary key.  <application/Slony-I/ will pick up that there is a
+<command>SET ADD TABLE</command> can be used without any need to reference the
+primary key.  <application>Slony-I</application> will pick up that there is a
 primary key, and use it.
 
 <listitem><para> If the table hasn't got a primary key, but has some
@@ -49,10 +50,11 @@
 
 <listitem><para> If the table hasn't even got a candidate primary key,
 you can ask Slony-I to provide one.  This is done by first using
-<command/TABLE ADD KEY/ to add a column populated using a Slony-I
-sequence, and then having the <command/SET ADD TABLE/ include the
-directive <option/key=serial/, to indicate that
-<application/Slony-I/'s own column should be used.</para></listitem>
+<command>TABLE ADD KEY</command> to add a column populated using a
+Slony-I sequence, and then having the <command>SET ADD TABLE</command>
+include the directive <option>key=serial</option>, to indicate that
+<application>Slony-I</application>'s own column should be
+used.</para></listitem>
 
 </itemizedlist>
 
@@ -64,24 +66,24 @@
 got any mechanism from your application's standpoint of keeping values
 unique.  Slony-I may therefore introduce a new failure mode for your
 application, and this implies that you had a way to enter confusing
-data into the database.
+data into the database.</para>
 
 <sect2><title> Grouping tables into sets</title>
 
 <para> It will be vital to group tables together into a single set if
 those tables are related via foreign key constraints.  If tables that
 are thus related are <emphasis>not</emphasis> replicated together,
-you'll find yourself in trouble if you switch the <quote/master
-provider/ from one node to another, and discover that the new
-<quote/master/ can't be updated properly because it is missing the
-contents of dependent tables.
+you'll find yourself in trouble if you switch the <quote>master
+provider</quote> from one node to another, and discover that the new
+<quote>master</quote> can't be updated properly because it is missing the
+contents of dependent tables.</para>
 
 <para> If a database schema has been designed cleanly, it is likely
 that replication sets will be virtually synonymous with namespaces.
 All of the tables and sequences in a particular namespace will be
 sufficiently related that you will want to replicate them all.
 Conversely, tables found in different schemas will likely NOT be
-related, and therefore should be replicated in separate sets.
+related, and therefore should be replicated in separate sets.</para>
 
 <!-- Keep this comment at the end of the file
 Local variables:
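The three table-adding cases discussed in defineset.sgml might look like the following slonik fragment; the table names, ids, and index name are hypothetical:

```
# Case 1: table with a formally identified primary key
set add table ( set id = 1, origin = 1, id = 1,
                fully qualified name = 'public.accounts' );

# Case 2: no primary key, but a unique, not-null candidate key
set add table ( set id = 1, origin = 1, id = 2,
                fully qualified name = 'public.history',
                key = 'history_uniq_idx' );

# Case 3: no usable key at all; ask Slony-I to add one
table add key ( node id = 1, fully qualified name = 'public.logdata' );
set add table ( set id = 1, origin = 1, id = 3,
                fully qualified name = 'public.logdata',
                key = serial );
```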
--- /dev/null
+++ doc/adminguide/slon.sgml
@@ -0,0 +1,180 @@
+<refentry id="slon">
+<refmeta>
+    <refentrytitle id="app-slon-title"><application>slon</application></refentrytitle>
+    <manvolnum>1</manvolnum>
+    <refmiscinfo>Application</refmiscinfo>
+  </refmeta>
+
+  <refnamediv>
+    <refname><application>slon</application></refname>
+    <refpurpose>
+      <productname>Slony-I</productname> daemon
+    </refpurpose>
+  </refnamediv>
+
+ <indexterm zone="slon">
+  <primary>slon</primary>
+ </indexterm>
+
+ <refsynopsisdiv>
+  <cmdsynopsis>
+   <command>slon</command>
+   <arg rep="repeat"><replaceable class="parameter">option</replaceable></arg>
+   <arg><replaceable class="parameter">clustername</replaceable>
+   <arg><replaceable class="parameter">conninfo</replaceable></arg></arg>
+  </cmdsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+  <title>Description</title>
+
+    <para>
+     <application>slon</application> is the daemon application that
+     <quote>runs</quote> <productname>Slony-I</productname>
+     replication.  A <application>slon</application> instance must be
+     run for each node in a <productname>Slony-I</productname>
+     cluster.
+    </para>
+ </refsect1>
+
+ <refsect1 id="R1-APP-SLON-3">
+  <title>Options</title>
+
+  <variablelist>
+    <varlistentry>
+      <term><option>-d <replaceable class="parameter">debuglevel</replaceable></></term>
+      <listitem>
+      <para>
+      Specifies the level of verbosity that <application>slon</application> should
+      use when logging its activity.
+      </para>
+     <para>The eight levels of logging are:
+      <itemizedlist>
+       <listitem><para>Error</para></listitem>
+       <listitem><para>Warn</para></listitem>
+       <listitem><para>Config</para></listitem>
+       <listitem><para>Info</para></listitem>
+       <listitem><para>Debug1</para></listitem>
+       <listitem><para>Debug2</para></listitem>
+       <listitem><para>Debug3</para></listitem>
+       <listitem><para>Debug4</para></listitem>
+      </itemizedlist></para>
+    </listitem>
+    </varlistentry>
+
+    <varlistentry>
+    <term><option>-s <replaceable class="parameter">SYNC check interval</replaceable></></term>
+    <listitem>
+     <para>
+      Specifies the interval, in milliseconds, in which
+      <application>slon</application> should add a SYNC even if none has been
+      mandated by data creation.  Default is 10000 ms.
+     </para>
+     
+     <para>Short sync times keep the master on a <quote>short
+      leash,</quote> updating the slaves more frequently.  If you have
+      replicated sequences that are frequently updated
+      <emphasis>without</emphasis> there being tables that are
+      affected, this keeps there from being times when only sequences
+      are updated, and therefore <emphasis>no</emphasis> syncs take
+      place.</para>
+    </listitem>
+   </varlistentry>
+
+    <varlistentry>
+      <term><option>-t <replaceable class="parameter">SYNC interval timeout</replaceable></></term>
+      <listitem>
+      <para>
+      Default is 60000 ms.
+      </para>
+      </listitem>
+    </varlistentry>
+
+    <varlistentry>
+    <term><option>-g <replaceable class="parameter">group size</replaceable></></term>
+    <listitem>
+     <para>
+      Maximum SYNC group size; defaults to 6.  Thus, if a particular
+      node is behind by 200 SYNCs, it will try to group them together
+      into groups of 6.  This would be expected to reduce transaction
+      overhead due to having fewer transactions to <command>COMMIT</command>.
+     </para>
+     
+     <para>The default of 6 is probably suitable for small systems
+      that can devote only very limited bits of memory to slon.  If you
+      have plenty of memory, it would be reasonable to increase this,
+      as it will increase the amount of work done in each transaction,
+      and will allow a subscriber that is behind by a lot to catch up
+      more quickly.</para>
+     
+     <para>Slon processes usually stay pretty small; even with large
+      value for this option, slon would be expected to only grow to a
+      few MB in size.</para>
+     
+    </listitem>
+   </varlistentry>
+
+    <varlistentry>
+    <term><option>-c <replaceable class="parameter">cleanup cycles</replaceable></></term>
+    <listitem>
+     <para>
+      How often to <command>VACUUM</command> in cleanup cycles.
+     </para>
+     
+     <para>Set this to zero to disable slon-initiated vacuuming.  If
+      you are using something like
+      <application>pg_autovacuum</application> to initiate vacuums, you
+      may not need for slon to initiate vacuums itself.  If you are
+      not, there are some tables Slony-I uses that collect a
+      <emphasis>lot</emphasis> of dead tuples that should be vacuumed
+      frequently.</para>
+
+    </listitem>
+   </varlistentry>
+   
+    <varlistentry>
+      <term><option>-p <replaceable class="parameter">PID filename</replaceable></></term>
+      <listitem>
+      <para>
+      PID filename.
+      </para>
+      </listitem>
+    </varlistentry>
+
+    <varlistentry>
+      <term><option>-f <replaceable class="parameter">config file</replaceable></></term>
+      <listitem><para>
+      File containing <application>slon</application> configuration.
+      </para>
+      </listitem>
+    </varlistentry>
+
+  </variablelist>
+ </refsect1>
+
+
+ <refsect1>
+  <title>Exit Status</title>
+
+  <para>
+   <application>slon</application> returns 0 to the shell if it
+   finished normally.  It returns -1 if it encounters any fatal error.
+  </para>
+ </refsect1>
+</refentry>
+
+
+<!-- Keep this comment at the end of the file
+Local variables:
+mode: sgml
+sgml-omittag:nil
+sgml-shorttag:t
+sgml-minimize-attributes:nil
+sgml-always-quote-attributes:t
+sgml-indent-step:1
+sgml-indent-data:t
+sgml-parent-document:"slony.sgml"
+sgml-exposed-tags:nil
+sgml-local-catalogs:"/usr/lib/sgml/catalog"
+sgml-local-ecat-files:nil
+End:
+-->
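The options documented in slon.sgml above combine into an invocation along these lines; the cluster name, conninfo string, and PID file path are hypothetical:

```
# Debug level 2, SYNC check every 10s, group up to 20 SYNCs per
# transaction, disable slon-initiated vacuums (-c 0)
slon -d 2 -s 10000 -g 20 -c 0 -p /var/run/slon.pid \
     testcluster "dbname=testdb host=server1 user=slony"
```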
Index: maintenance.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/maintenance.html -Ldoc/adminguide/maintenance.html -u -w -r1.1 -r1.2
--- doc/adminguide/maintenance.html
+++ doc/adminguide/maintenance.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE="Monitoring"
 HREF="monitoring.html"><LINK
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="MAINTENANCE"
->13. Slony-I Maintenance</A
+>5. Slony-I Maintenance</A
 ></H1
 ><P
 >Slony-I actually does most of its necessary maintenance itself, in a "cleanup" thread:
@@ -89,41 +89,46 @@
 ><UL
 ><LI
 ><P
-> Deletes old data from various tables in the
-	Slony-I cluster's namespace, notably entries in sl_log_1,
-	sl_log_2 (not yet used), and sl_seqlog.
-
-	</P
+> Deletes old data from various tables in the Slony-I
+cluster's namespace, notably entries in sl_log_1, sl_log_2 (not yet
+used), and sl_seqlog.&#13;</P
 ></LI
 ><LI
 ><P
-> Vacuum certain tables used by Slony-I.  As of
-	1.0.5, this includes pg_listener; in earlier versions, you
-	must vacuum that table heavily, otherwise you'll find
-	replication slowing down because Slony-I raises plenty of
-	events, which leads to that table having plenty of dead
-	tuples.
-
-	</P
-><P
-> In some versions (1.1, for sure; possibly 1.0.5) there is the option of not bothering to vacuum any of these tables if you are using something like pg_autovacuum to handle vacuuming of these tables.  Unfortunately, it has been quite possible for pg_autovacuum to not vacuum quite frequently enough, so you probably want to use the internal vacuums.  Vacuuming pg_listener "too often" isn't nearly as hazardous as not vacuuming it frequently enough.
-
-	</P
-><P
->Unfortunately, if you have long-running transactions, vacuums cannot clear out dead tuples that are newer than the eldest transaction that is still running.  This will most notably lead to pg_listener growing large and will slow replication.&#13;</P
+> Vacuum certain tables used by Slony-I.  As of 1.0.5,
+this includes pg_listener; in earlier versions, you must vacuum that
+table heavily, otherwise you'll find replication slowing down because
+Slony-I raises plenty of events, which leads to that table having
+plenty of dead tuples.&#13;</P
+><P
+> In some versions (1.1, for sure; possibly 1.0.5) there is the
+option of not bothering to vacuum any of these tables if you are using
+something like pg_autovacuum to handle vacuuming of these tables.
+Unfortunately, it has been quite possible for pg_autovacuum to not
+vacuum quite frequently enough, so you probably want to use the
+internal vacuums.  Vacuuming pg_listener "too often" isn't nearly as
+hazardous as not vacuuming it frequently enough.&#13;</P
+><P
+>Unfortunately, if you have long-running transactions, vacuums
+cannot clear out dead tuples that are newer than the eldest
+transaction that is still running.  This will most notably lead to
+pg_listener growing large and will slow replication.&#13;</P
 ></LI
 ></UL
-></P
+>&#13;</P
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN587"
->13.1. Watchdogs: Keeping Slons Running</A
+NAME="AEN665"
+>5.1. Watchdogs: Keeping Slons Running</A
 ></H2
 ><P
->There are a couple of "watchdog" scripts available that monitor things, and restart the slon processes should they happen to die for some reason, such as a network "glitch" that causes loss of connectivity.&#13;</P
+>There are a couple of "watchdog" scripts available that monitor
+things, and restart the slon processes should they happen to die for
+some reason, such as a network "glitch" that causes loss of
+connectivity.&#13;</P
 ><P
 >You might want to run them...&#13;</P
 ></DIV
@@ -132,19 +137,30 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN591"
->13.2. Alternative to Watchdog: generate_syncs.sh</A
+NAME="AEN669"
+>5.2. Alternative to Watchdog: generate_syncs.sh</A
 ></H2
 ><P
->A new script for Slony-I 1.1 is "generate_syncs.sh", which addresses the following kind of situation.&#13;</P
-><P
->Supposing you have some possibly-flakey slon daemon that might not run all the time, you might return from a weekend away only to discover the following situation...&#13;</P
-><P
->On Friday night, something went "bump" and while the database came back up, none of the slon daemons survived.  Your online application then saw nearly three days worth of heavy transactions.&#13;</P
-><P
->When you restart slon on Monday, it hasn't done a SYNC on the master since Friday, so that the next "SYNC set" comprises all of the updates between Friday and Monday.  Yuck.&#13;</P
+>A new script for Slony-I 1.1 is "generate_syncs.sh", which
+addresses the following kind of situation.&#13;</P
 ><P
->If you run generate_syncs.sh as a cron job every 20 minutes, it will force in a periodic SYNC on the "master" server, which means that between Friday and Monday, the numerous updates are split into more than 100 syncs, which can be applied incrementally, making the cleanup a lot less unpleasant.&#13;</P
+>Supposing you have some possibly-flakey slon daemon that might
+not run all the time, you might return from a weekend away only to
+discover the following situation...&#13;</P
+><P
+>On Friday night, something went "bump" and while the database
+came back up, none of the slon daemons survived.  Your online
+application then saw nearly three days worth of heavy transactions.&#13;</P
+><P
+>When you restart slon on Monday, it hasn't done a SYNC on the
+master since Friday, so that the next "SYNC set" comprises all of the
+updates between Friday and Monday.  Yuck.&#13;</P
+><P
+>If you run generate_syncs.sh as a cron job every 20 minutes, it
+will force in a periodic SYNC on the "master" server, which means that
+between Friday and Monday, the numerous updates are split into more
+than 100 syncs, which can be applied incrementally, making the cleanup
+a lot less unpleasant.&#13;</P
 ><P
 >Note that if SYNCs <SPAN
 CLASS="emphasis"
@@ -152,30 +168,33 @@
 CLASS="EMPHASIS"
 >are</I
 ></SPAN
-> running regularly, this script won't bother doing anything.&#13;</P
+> running regularly, this script
+won't bother doing anything.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN600"
->13.3. Log Files</A
+NAME="AEN678"
+>5.3. Log Files</A
 ></H2
 ><P
->Slon daemons generate some more-or-less verbose log files, depending on what debugging level is turned on.  You might assortedly wish to:
+>Slon daemons generate some more-or-less verbose log files,
+depending on what debugging level is turned on.  You might assortedly
+wish to:
+
 <P
 ></P
 ><UL
 ><LI
 ><P
-> Use a log rotator like Apache rotatelogs to have a sequence of log files so that no one of them gets too big;
-
-	</P
+> Use a log rotator like Apache rotatelogs to have a
+sequence of log files so that no one of them gets too big;&#13;</P
 ></LI
 ><LI
 ><P
-> Purge out old log files, periodically.</P
+> Purge out old log files, periodically.&#13;</P
 ></LI
 ></UL
 >
@@ -235,7 +254,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
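The log-rotation suggestion in maintenance.html above might be realized with Apache's rotatelogs, piping slon's output into dated files; the paths and 86400-second (daily) rotation interval are only an example:

```
slon testcluster "dbname=testdb host=server1 user=slony" 2>&1 | \
    rotatelogs /var/log/slony/slon_log.%Y%m%d 86400
```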
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -33,18 +33,18 @@
 KirovOpenSourceCommunity: Slony</ulink> may be the place to go
 </itemizedlist>
 
-<sect1><title/ Other Information Sources/
+<sect2><title> Other Information Sources</title>
 <itemizedlist>
 
-<listitem><Para> <ulink
-url="http://comstar.dotgeek.org/postgres/slony-config/">
+<listitem><para> <ulink url=
+"http://comstar.dotgeek.org/postgres/slony-config/">
 slony-config</ulink> - A Perl tool for configuring Slony nodes using
 config files in an XML-based format that the tool transforms into a
-Slonik script
+Slonik script</para></listitem>
 
 </itemizedlist>
 
-
+</sect2>
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
--- /dev/null
+++ doc/adminguide/slonik_ref.sgml
@@ -0,0 +1,1702 @@
+<article id="slonikcommands"><title/Slonik Command Summary/
+<sect1 id="introduction"><title/Slonik Command Summary/
+<sect2><title/Introduction/
+
+<para>
+	<application/Slonik/ is a command line utility designed
+	specifically to set up and modify configurations of the
+	<productname/Slony-I/ replication system.
+
+
+<sect2 id="outline">
+<title>General outline</title>
+
+<para>
+	The <application/slonik/ command line utility is supposed to be
+	used embedded into shell scripts and reads commands from files
+	or stdin (via here documents for example). Nearly all of the
+	<emphasis>real</emphasis> configuration work is done by
+	calling stored procedures after loading the
+	<productname/Slony-I/ support base into a database.  You may
+	find documentation for those procedures in the <a
+	href="schemadoc.html"> <productname/Slony-I/ Schema
+	Documentation </a>, as well as in comments associated with
+	them in the database.
+</para>
+
+      <para>
+	<Application/Slonik/ was created because:
+      <itemizedlist>
+
+	<listitem><para> The stored procedures have special requirements
+	as to which particular node in the replication system they are
+	called on,
+
+	<listitem><para> the lack of named parameters for stored procedures makes
+	it rather hard to do this from the psql prompt, and
+
+	<listitem><para> psql lacks the ability to maintain multiple connections
+	with open transactions.
+      </itemizedlist>
+</para>
+<para>
+	
+</para>
+<sect3><title>Commands</title>
+<para>
+	The slonik command language is format free. Commands begin with
+	keywords and are terminated with a semicolon. Most commands have
+	a list of parameters, some of which have default values and are
+	therefore optional. The parameters of commands are enclosed in
+	parentheses. Each option consists of one or more keywords, followed
+	by an equal sign, followed by a value. Multiple options inside the
+	parentheses are separated by commas. All keywords are case
+	insensitive.  The language should remind the reader of SQL.
+</para>
+<para>
+	Option values may be:
+	<itemizedlist>
+		<listitem><para>integer values
+		<listitem><para>string literals enclosed in single quotes
+		<listitem><para>boolean values {TRUE|ON|YES} or {FALSE|OFF|NO}
+		<listitem><para>keywords for special cases
+	</itemizedlist>
+</para>
+<sect3><title>Comments</title>
+<para>
+	Comments begin at a hash sign (#) and extend to the end of the line.
+</para>
+<sect3><title>Command groups</title>
+<para>
+	Commands can be combined into groups of commands with optional
+	<command>on error</command> and <command>on success</command> conditionals.
+	The syntax for this is:
+<programlisting>
+try {
+    &lt;commands&gt;
+}
+[on error { &lt;commands&gt; }]
+[on success { &lt;commands&gt; }]
+</programlisting>
+	Those commands are grouped together into one transaction per
+	participating node.
+</para>
+
+</div>
+
+<!-- ************************************************************ -->
+<sect2 id="hdrcmds">
+<title>Commands affecting Slonik</title>
+</a>
+
+<para>
+	The following commands must appear as a <quote/preamble/ at the very
+	top of every <application/slonik/ command script. They do not cause any
+	direct action on any of the nodes in the replication system,
+	but affect the execution of the entire script.
+</para>
+
+<!-- **************************************** -->
+
+<sect3 id="clustername"><title>CLUSTER NAME</title>
+
+
+<sect3><title>Synopsis:</title>
+	CLUSTER NAME = &lt;string&gt;;
+<sect3><title>Description:</title>
+<para>
+	Must be the very first command in every <application/slonik/ script. Defines
+	the namespace in which all <productname/Slony-I/ specific functions,
+	procedures, tables and sequences are defined. The namespace
+	name is built by prefixing the given string literal with an
+	underscore. This namespace will be identical in all databases
+	that participate in the same replication group. 
+</para>
+
+<para>
+                  No user objects are supposed to live in this
+                  namespace and the namespace is not allowed to exist
+                  prior to adding a database to the replication
+                  system.  Thus, if you add a new node using <tt>
+                  pg_dump -s </tt> on a database that is already in
+                  the cluster of replicated databases, you will need
+                  to drop the namespace via the SQL command <tt> DROP
+                  SCHEMA _testcluster CASCADE; </tt>.
+</para>
+<sect3><title>Example:</title>
+<para>
+	CLUSTER NAME = 'testcluster';
+</para>
+
+
+
+<!-- **************************************** -->
+
+<sect3 id="admconninfo"><title>ADMIN CONNINFO</title>
+
+<sect3><title>Synopsis:</title>
+	NODE &lt;ival&gt; ADMIN CONNINFO = &lt;string&gt;;
+<sect3><title>Description:</title>
+
+<para>
+	Describes how the <application/slonik/ utility can reach a
+	node's database in the cluster from where it is run (likely the
+	DBA's workstation). The conninfo string is the string argument
+	given to the PQconnectdb() libpq function. The user to connect
+	as must be the special replication superuser, as some of the
+	actions performed later may include operations that are
+	strictly reserved for database superusers by PostgreSQL.
+</para>
+<para>
+	The <application/slonik/ utility will not try to connect to the databases
+	unless some subsequent command requires the connection.
+</para>
+<para>
+	Note: As mentioned in the original documents, <productname/Slony-I/ is designed as an
+	enterprise replication system for data centers. It has been assumed
+	throughout the entire development that the database servers and
+	administrative workstations involved in replication and/or setup
+	and configuration activities can use simple authentication schemes
+	like <tt>trust</tt>.   Alternatively, libpq can read passwords from
+                 <tt> .pgpass </tt>.
+</para>
+
+<para>
+	Note: If you need to change the DSN information for a node, as
+	would happen if the IP address for a host were to change, you
+	may submit the new information using this command, and that
+	configuration will be propagated.  Existing <tt> slon </tt>
+	processes will need to be restarted in order to become aware
+	of the configuration change.
+</para>
+<sect3><title>Example:</title>
+<para>
+	NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_echo"><title>ECHO</title>
+
+
+<sect3><title>Synopsis:</title>
+	ECHO &lt;string&gt;;
+<sect3><title>Description:</title>
+<para>
+	Prints the string literal on standard output.
+</para>
+<sect3><title>Example:</title>
+<para>
+	ECHO 'Node 1 initialized successfully';
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_exit"><title>EXIT</title>
+
+<sect3><title>Synopsis:</title>
+	EXIT [-]&lt;ival&gt;;
+<sect3><title>Description:</title>
+<para>
+	Terminates script execution immediately, rolling back every
+	open transaction on all database connections. The <application/slonik/ utility
+	will return the given value as its program termination code.
+</para>
+<sect3><title>Example:</title>
+<para>
+	EXIT 0;
+</para>
+</div>
+
+
+</div>
+
+<!-- ************************************************************ -->
+<sect2 id="cmds">
+<title>Configuration and Action commands</title>
+
+<div style="margin-left:40px; margin-right:80px;">
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_init_cluster"><title>INIT CLUSTER</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	INIT CLUSTER ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Initialize the first node in a new <productname/Slony-I/ replication cluster.
+	The initialization process consists of creating the cluster
+	namespace, loading all the base tables, functions and procedures,
+	and initializing the node.
+</para>
+<para>
+	For this process to work, the SQL scripts of the <productname/Slony-I/ system
+	must be installed on the DBA workstation (the computer currently
+	executing the <application/slonik/ utility), while the shared objects of
+	the <productname/Slony-I/ system must be installed in the PostgreSQL
+	library directory on the system where the node database is running.
+	The procedural language PL/pgSQL is also assumed to be installed
+	already in the target database.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The unique, numeric ID number of the node. This MUST be 1.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>COMMENT = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		A descriptive text added to the node entry in the
+		table sl_node.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	INIT CLUSTER (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;COMMENT = 'Node 1'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_store_node"><title>STORE NODE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	STORE NODE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Initialize a new node and add it to the configuration of
+	an existing cluster.
+</para>
+<para>
+	The initialization process consists of creating the cluster
+	namespace in the new node (the database itself must already
+	exist), loading all the base tables, functions, procedures
+	and initializing the node. The existing configuration of the
+	rest of the cluster is copied from the <b>event node</b>.
+</para>
+<para>
+	The same installation requirements as for the <command>init cluster</command>
+	command apply.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The unique, numeric ID number of the new node.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>COMMENT = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		A descriptive text added to the node entry in the
+		table sl_node.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>SPOOLNODE = &lt;boolean&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Specifies that the new node is a virtual spool node for
+		file archiving of replication log. If true <application/slonik/ will not
+		attempt to initialize a database with the replication
+		schema.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EVENT NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The ID of the node used to create the configuration event
+		that tells all existing nodes about the new node. Default
+		value is 1.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	STORE NODE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 2,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;COMMENT = 'Node 2'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_drop_node"><title>DROP NODE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	DROP NODE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Drop a node. This command removes the specified node entirely from
+	the replication system's configuration. If the replication daemon
+	is still running on that node (and processing events), it will
+	attempt to uninstall the replication system and terminate itself.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the system to remove.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EVENT NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the system to generate the event.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	DROP NODE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 2
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_uninstall_node"><title>UNINSTALL NODE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	UNINSTALL NODE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Restores all tables to the unlocked state, with all original
+	user triggers, constraints and rules back in place; any
+	<productname/Slony-I/-specific serial key columns that were added
+	are dropped, along with the <productname/Slony-I/ schema. The node
+	becomes a standalone database. The data is
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the system to uninstall.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	UNINSTALL NODE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 3
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_restart_node"><title>RESTART NODE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	RESTART NODE &lt;ival&gt;;
+<sect3><title>Description:</title>
+<para>
+	Causes the replication daemon on the specified node, if one is
+	running, to shut down and restart itself. Theoretically, this
+	command should be obsolete. In practice, TCP timeouts can delay
+	critical configuration changes from actually taking effect in the
+	case where a former forwarding node failed and needs to be
+	bypassed by subscribers.
+</para>
+<sect3><title>Example:</title>
+<para>
+	RESTART NODE 2;
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_store_path"><title>STORE PATH</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	STORE PATH ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Configures how the replication daemon of one node connects to the
+	database of another node. If the replication system is supposed
+	to use a special backbone network segment, this is the place to
+	use the special IP addresses or hostnames. An existing
+	configuration can be overwritten.
+</para>
+<para>
+	The conninfo string must contain all information needed to connect
+	to the database as the replication superuser. The names <b>server</b>
+	and <b>client</b> have nothing to do with the particular role of a
+	node within the cluster configuration. They should simply be viewed
+	as "the <b>server</b> has the message or data that the <b>client</b>
+	is supposed to get". For a simple 2 node setup, paths in both
+	directions must be configured.
+</para>
+<para>
+	It does not do any harm to configure path information from every
+	node to every other node (full cross product). The connections
+	are not established unless they are required to actually transfer
+	events or confirmations because of <b>listen</b> entries or data
+	because of <b>subscriptions</b>.
+</para>
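+<para>
+	For example (a sketch only; the hostnames and the database user
+	are illustrative), a simple 2 node setup needs path entries in
+	both directions:
+</para>
+
+```
+STORE PATH ( SERVER = 1, CLIENT = 2,
+    CONNINFO = 'dbname=testdb host=server1 user=slony' );
+STORE PATH ( SERVER = 2, CLIENT = 1,
+    CONNINFO = 'dbname=testdb host=server2 user=slony' );
+```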
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>SERVER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the database to connect to.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>CLIENT = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the replication daemon connecting.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>CONNINFO = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		PQconnectdb() argument to establish the connection.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>CONNRETRY = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		Number of seconds to wait before another attempt to connect
+		is made in case the server is unavailable. Default is 10.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	STORE PATH (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;SERVER = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;CLIENT = 2,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;CONNINFO = 'dbname=testdb host=server1 user=slony'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_drop_path"><title>DROP PATH</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	DROP PATH ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Remove the connection information between <b>server</b> and
+	<b>client</b>. 
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>SERVER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the server of this connection.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>CLIENT = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the client.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EVENT NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The ID of the node used to create the configuration event
+		that tells all existing nodes about dropping the path.
+		Defaults to the <b>client</b>.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	DROP PATH (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;SERVER = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;CLIENT = 2
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_store_listen"><title>STORE LISTEN</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	STORE LISTEN ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	A <b>listen</b> entry causes a node (receiver) to query an event
+	provider for events that originate from a specific node, as well
+	as confirmations from every existing node. It requires a <b>path</b>
+	to exist so that the receiver (as client) can connect to the provider
+	(as server).
+</para>
+<para>
+	Every node in the system must listen for events from every other
+	node in the system. As a general rule of thumb, a subscriber (see
+	<a href="#stmt_subscribe_set">SUBSCRIBE SET</a>) should listen for
+	events of the set's origin on the same provider from which it
+	receives the data. In turn, the origin of the data set should
+	listen for events in the opposite direction. A node can listen
+	for events from one and the same origin on different providers
+	at the same time. However, to process SYNC events from that
+	origin, all data providers must have the same or higher sync
+	status, so this will not result in any faster replication behaviour.
+</para>
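+<para>
+	Following the rule of thumb above, a 2 node cluster where node 1
+	is the set origin and node 2 subscribes directly from it would,
+	as a sketch, carry listen entries in both directions:
+</para>
+
+```
+STORE LISTEN ( ORIGIN = 1, PROVIDER = 1, RECEIVER = 2 );
+STORE LISTEN ( ORIGIN = 2, PROVIDER = 2, RECEIVER = 1 );
+```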
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the event origin the receiver is listening for.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>PROVIDER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		ID of the node from which the receiver gets events from
+		the origin. Default is the origin.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>RECEIVER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the node receiving the events.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	STORE LISTEN (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;RECEIVER = 2
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_drop_listen"><title>DROP LISTEN</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	DROP LISTEN ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Remove a <b>listen</b> configuration entry.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the event origin the receiver is listening for.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>PROVIDER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		ID of the node from which the receiver gets events from
+		the origin. Default is the origin.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>RECEIVER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the node receiving the events.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	DROP LISTEN (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;RECEIVER = 2
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_table_add_key"><title>TABLE ADD KEY</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	TABLE ADD KEY ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	In the <productname/Slony-I/ replication system, every replicated table is
+	required to have at least one UNIQUE constraint whose columns
+	are declared <tt>NOT NULL</tt>. Any primary key satisfies this
+	requirement.
+</para>
+<para>
+	As a last resort, this command can be used to add such an
+	attribute to a table that does not have a primary key. Since
+	this modification can have unwanted side effects, <b>it is
+	strongly recommended that users add a unique and not null
+	attribute by other means.</b>
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>NODE ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set origin where the table will be added as
+		set member (See <a href="#stmt_set_add_table">SET ADD TABLE</a>).
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>FULLY QUALIFIED NAME = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The full name of the table consisting of the schema and table name
+		as the expression
+		<br><emphasis>quote_ident(nspname) || '.' || quote_ident(relname)</emphasis>
+		<br>would return it.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	TABLE ADD KEY (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;NODE ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FULLY QUALIFIED NAME = 'public.history'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_create_set"><title>CREATE SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	CREATE SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	In the <productname/Slony-I/ replication system, replicated tables are
+	organized in sets. As a general rule of thumb, a set should
+	contain all of an application's tables that have relationships
+	with one another. In a well designed application, this is equal
+	to all the tables in one schema.
+</para>
+<para>
+	The smallest unit one node can subscribe for replication from
+	another node is a set. A set always has an origin. In
+	classical replication terms, that would be the "master."
+	Since in <productname/Slony-I/ a node can be the "master" over one set,
+	while receiving replication data in the "slave" role for
+	another at the same time, this terminology may easily become
+	misleading and should therefore be replaced with <b>set
+	origin</b> and <b>subscriber</b>.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the set to be created.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Initial origin of the set.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>COMMENT = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		A descriptive text added to the set entry.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	CREATE SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;COMMENT = 'Tables of ticket system'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_drop_set"><title>DROP SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	DROP SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Drop a set of tables from the <productname/Slony-I/ configuration. This
+	automatically unsubscribes all nodes from the set and restores
+	the original triggers and rules on all subscribers.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the set to be dropped.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	DROP SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 5,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_merge_set"><title>MERGE SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	MERGE SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Merge a set of tables and sequences into another one. This
+	function is a workaround for the problem that it is not
+	possible to add tables/sequences to already-subscribed
+	sets. One may create a temporary set, add the new objects to
+	that, subscribe all nodes currently subscribed to the other
+	set to this new one, and then merge the two together.
+</para>
+<para>
+	This request will fail if the two sets do not have exactly the
+	same set of subscribers.
+</para>
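+<para>
+	The workaround described above can be sketched as the following
+	script fragment (the set IDs, node IDs and table name are
+	hypothetical):
+</para>
+
+```
+-- create a temporary set on the origin and add the new table to it
+CREATE SET ( ID = 9999, ORIGIN = 1, COMMENT = 'temporary set' );
+SET ADD TABLE ( SET ID = 9999, ORIGIN = 1, ID = 21,
+    FULLY QUALIFIED NAME = 'public.new_table' );
+
+-- subscribe every node that already subscribes the main set
+SUBSCRIBE SET ( ID = 9999, PROVIDER = 1, RECEIVER = 2 );
+
+-- once the subscription is fully processed, merge the sets
+MERGE SET ( ID = 2, ADD ID = 9999, ORIGIN = 1 );
+```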
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the set to contain the union of the two separate sets.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ADD ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the set whose objects should be transferred.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the two sets.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	MERGE SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 2,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ADD ID = 9999,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_set_add_table"><title>SET ADD TABLE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SET ADD TABLE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Add an existing user table to a replication set. The set
+	cannot currently be subscribed to by any other node - that
+	functionality is supported by the <tt>MERGE SET</tt> command.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>SET ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to which the table is added.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set. A future version of <application/slonik/
+		might figure out this information by itself.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the table. These IDs are not only used to
+		uniquely identify the individual table within the replication
+		system; the numeric value of this ID also determines the order
+		in which the tables are locked by, for example, a
+		<a href="#stmt_lock_set">LOCK SET</a> command. These numbers
+		should therefore reflect any applicable table hierarchy, to
+		make sure the <application/slonik/ command scripts do not
+		deadlock at any critical moment.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>FULLY QUALIFIED NAME = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The full table name as described in
+		<a href="#stmt_table_add_key">TABLE ADD KEY</a>.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>KEY = { &lt;string&gt; | SERIAL }</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The index name that covers the unique and not null column set
+		to be used as the row identifier for replication purposes. Or the
+		keyword SERIAL to use the special column added with a previous
+		<a href="#stmt_table_add_key">TABLE ADD KEY</a> command. Default
+		is to use the table's primary key.  The index name is <emphasis> not </emphasis> 
+		fully qualified; you must omit the namespace.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>COMMENT = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		A descriptive text added to the <productname/Slony-I/ configuration data.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SET ADD TABLE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;SET ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 20,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FULLY QUALIFIED NAME = 'public.tracker_ticket',
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;COMMENT = 'Support ticket'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_set_add_sequence"><title>SET ADD SEQUENCE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SET ADD SEQUENCE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Add an existing user sequence to a replication set. The set
+	cannot currently be subscribed by any other node - that
+	functionality is supported by the MERGE SET command.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>SET ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to which the sequence is added.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set. A future version of <application/slonik/
+		might figure out this information by itself.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the sequence.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>FULLY QUALIFIED NAME = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The full sequence name as described in
+		<a href="#stmt_table_add_key">TABLE ADD KEY</a>.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>COMMENT = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		A descriptive text added to the <productname/Slony-I/ configuration data.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SET ADD SEQUENCE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;SET ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 21,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FULLY QUALIFIED NAME = 'public.tracker_ticket_id_seq',
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;COMMENT = 'Support ticket id sequence'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_set_drop_table"><title>SET DROP TABLE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SET DROP TABLE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Drop an existing user table from a replication set.
+</para>
+
+<para>
+	  Note that this action will <em>not</em> drop a candidate
+	  primary key created using <tt>TABLE ADD KEY</tt>.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set. A future version of <application/slonik/
+		might figure out this information by itself.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the table. 
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SET DROP TABLE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 20
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_set_drop_sequence"><title>SET DROP SEQUENCE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SET DROP SEQUENCE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Drops an existing user sequence from a replication set.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the sequence.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SET DROP SEQUENCE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 21
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_set_move_table"><title>SET MOVE TABLE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SET MOVE TABLE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Change the set a table belongs to. The current set and the new set
+	must originate on the same node and be subscribed to by the same
+	nodes. CAUTION: due to the way subscribing to new sets works, make
+	absolutely sure that the subscription of all nodes to the sets
+	is completely processed before moving tables. Moving a table to a
+	new set too early causes the subscriber to try to add the table
+	during the subscription process, which fails with a duplicate
+	key error and breaks replication.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set. A future version of <application/slonik/
+		might figure out this information by itself.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the table. 
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>NEW SET = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the new set. 
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SET MOVE TABLE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 20,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;NEW SET = 3
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_set_move_sequence"><title>SET MOVE SEQUENCE</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SET MOVE SEQUENCE ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Change the set a sequence belongs to. The current set and the new
+	set must originate on the same node and be subscribed to by the
+	same nodes. CAUTION: due to the way subscribing to new sets works,
+	make absolutely sure that the subscription of all nodes to the
+	sets is completely processed before moving sequences. Moving a
+	sequence to a new set too early causes the subscriber to try to
+	add the sequence during the subscription process, which fails
+	with a duplicate key error and breaks replication.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The current origin of the set. A future version of <application/slonik/
+		might figure out this information by itself.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the sequence. 
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>NEW SET = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Unique ID of the new set. 
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SET MOVE SEQUENCE (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 54,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;NEW SET = 3
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_store_trigger"><title>STORE TRIGGER</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	STORE TRIGGER ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	By default, all user defined triggers and constraints are
+	disabled on all subscriber nodes while a table is
+	replicated. This command can be used to explicitly exclude a
+	trigger from being disabled.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>TABLE ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The unique, numeric ID number of the table the trigger is defined for.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>TRIGGER NAME = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The name of the trigger as it appears in the pg_trigger
+		system catalog.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EVENT NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The ID of the node used to create the configuration event
+		that tells all existing nodes about the special trigger. Default
+		value is 1.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	STORE TRIGGER (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;TABLE ID = 2,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;TRIGGER NAME = 'cache_invalidation'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_drop_trigger"><title>DROP TRIGGER</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	DROP TRIGGER ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Remove the special handling for the specified trigger.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>TABLE ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The unique, numeric ID number of the table the trigger is defined for.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>TRIGGER NAME = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The name of the trigger as it appears in the pg_trigger
+		system catalog.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EVENT NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The ID of the node used to create the configuration event
+		that tells all existing nodes about removing the special trigger. Default
+		value is 1.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	DROP TRIGGER (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;TABLE ID = 2,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;TRIGGER NAME = 'cache_invalidation'
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_subscribe_set"><title>SUBSCRIBE SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	SUBSCRIBE SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Causes a node (subscriber) to start replicating a set of
+	tables either from the origin or from another provider node,
+	which must be a currently forwarding subscriber itself.
+</para>
+<para>
+	The application tables contained in the set must already exist
+	and should ideally be empty. The current version of
+	<productname/Slony-I/ will not attempt to copy the schema of the set. The
+	replication daemon will start by copying the current content of
+	the set from the given provider and then try to catch up with
+	any update activity that happened during that copy
+	process. After a successful subscription, the tables on the
+	subscriber are guarded by triggers against accidental updates
+	by the application.
+</para>
+
+<para>
+	Note: If you need to revise the subscription information for a
+	node, you may submit the new information using this command,
+	and the new configuration will be propagated throughout the
+	replication network.  The usual reasons to revise this
+	information are that you want a node to subscribe to a
+	<emphasis>different</emphasis> provider node, or that you want
+	a node to become a "forwarding" subscriber so it can later act
+	as the provider for another subscriber.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to subscribe to.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>PROVIDER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the data provider where this set is subscribed from.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>RECEIVER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the new subscriber.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>FORWARD = &lt;boolean&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Flag indicating whether or not the new subscriber should
+		store the log information during replication, making it a
+		possible candidate for the provider role for future
+		nodes.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	SUBSCRIBE SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;PROVIDER = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;RECEIVER = 3,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FORWARD = YES
+	<br>);
+</para>
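+<para>
+	As noted above, the same command can be used to revise an
+	existing subscription. The following sketch (node numbers are
+	purely illustrative) re-points the subscriber at node 2,
+	assuming node 2 is already a forwarding subscriber of the set:
+</para>
+<para>
+	SUBSCRIBE SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;PROVIDER = 2,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;RECEIVER = 3,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FORWARD = YES
+	<br>);
+</para>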
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_unsubscribe_set"><title>UNSUBSCRIBE SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	UNSUBSCRIBE SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Stops the subscriber from replicating the set. The tables are
+	opened up for full access by the client application on the
+	former subscriber. The tables are not truncated or otherwise
+	modified. All original triggers, rules and constraints are
+	restored.
+</para>
+<para>
+	<b>Warning!</b> Resubscribing an unsubscribed set requires a
+	<emphasis>complete fresh copy</emphasis> of the data to be
+	transferred from the provider, since the tables may have been
+	modified independently in the meantime.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to unsubscribe.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>RECEIVER = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the subscriber.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	UNSUBSCRIBE SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;RECEIVER = 3
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_lock_set"><title>LOCK SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	LOCK SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Guards a replication set against client application updates in
+	preparation for a <a href="#stmt_move_set">MOVE SET</a>
+	command.
+</para>
+<para>
+	This command must be the first statement in a possible
+	statement group (<tt>try</tt> block).  The reason is that it
+	needs to commit the changes made to the tables (adding a
+	special trigger function) before it can wait for every
+	concurrent transaction to finish. At the same time it cannot
+	hold an open transaction on the same database itself, since
+	that would block forever on itself.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to lock.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the current set origin.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	LOCK SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_unlock_set"><title>UNLOCK SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	UNLOCK SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Unlock a previously locked set.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to unlock.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the current set origin.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	UNLOCK SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = 1
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_move_set"><title>MOVE SET</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	MOVE SET ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Changes the origin of a set from one node to another. The new origin
+	must be a current subscriber of the set. The set must currently be
+	locked on the old origin. 
+</para>
+<para>
+	After this command, the set cannot be unlocked on the old
+	origin any more. The old origin will continue as a forwarding
+	subscriber of the set and the subscription chain from the old
+	origin to the new origin will be reversed, hop by hop. As soon
+	as the new origin has finished processing the event (that
+	includes any outstanding sync events that happened before,
+	<emphasis>i.e.</emphasis> fully catching up), the new origin will take over
+	and open all tables in the set for client application update
+	activity.
+</para>
+<para>
+	This is not failover, as it requires a fully functional old
+	origin node.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the set to transfer.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>OLD ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the current set origin.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>NEW ORIGIN = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		Node ID of the new set origin.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	MOVE SET (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;OLD ORIGIN = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;NEW ORIGIN = 3
+	<br>);
+</para>
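+<para>
+	A controlled origin transfer typically combines several
+	commands. The following sketch (node numbers are purely
+	illustrative) locks the set on the old origin, waits for the
+	intended new origin to catch up, and then moves the set:
+</para>
+<para>
+	lock set (id = 1, origin = 1);
+	<br>wait for event (origin = 1, confirmed = 3);
+	<br>move set (id = 1, old origin = 1, new origin = 3);
+	<br>wait for event (origin = 1, confirmed = 3);
+</para>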
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_failover"><title>FAILOVER</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	FAILOVER ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	WARNING: This command will abandon the status of the failed
+	node.  There is no way for the failed node to rejoin the
+	cluster without being rebuilt from scratch as a slave.
+</para>
+<para>
+	The failover command causes the backup node to take over all
+	sets that currently originate on the failed node. <Application/Slonik/ will
+	contact all other direct subscribers of the failed node to
+	determine which node has the highest sync status for each
+	set. If another node has a higher sync status than the backup
+	node, the replication will first be redirected so that the
+	backup node replicates against that other node, before
+	assuming the origin role and allowing update activity.
+</para>
+<para>
+	After successful failover, all former direct subscribers of
+	the failed node become direct subscribers of the backup
+	node. The failed node can and should be removed from the
+	configuration with <tt>DROP NODE</tt>.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the failed node.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>BACKUP NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		ID of the node that will take over all sets.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	FAILOVER (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;BACKUP NODE = 2
+	<br>);
+</para>
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_ddl_script"><title>EXECUTE SCRIPT</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	EXECUTE SCRIPT ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Executes a script containing arbitrary SQL statements on all
+	nodes that are subscribed to a set at a common controlled
+	point within the replication transaction stream.
+</para>
+<para>
+	The specified event origin must be the origin of the set.  The
+	script file must not contain any START or COMMIT TRANSACTION
+	calls.  (This may change in PostgreSQL 8.0 once nested
+	transactions, also known as savepoints, are supported.)  In
+	addition, non-deterministic DML statements (such as updating a
+	field with CURRENT_TIMESTAMP) must be avoided, since the data
+	changes made by the script are explicitly not replicated.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>SET ID = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The unique, numeric ID number of the set affected by the script.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>FILENAME = &lt;string&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The name of the file containing the SQL script to execute.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EVENT NODE = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The ID of the current origin of the set.
+		The default value is 1.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>EXECUTE ONLY ON = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		<b>(Optional)</b>
+		The ID of the only node to actually execute the script.
+		This option causes the script to be propagated by all nodes
+		but executed only by one.
+		The default is to execute the script on all nodes that are
+		subscribed to the set.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	EXECUTE SCRIPT (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;SET ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FILENAME = 'changes_20040510.sql',
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;EVENT NODE = 1
+	<br>);
+</para>
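+<para>
+	If the script should be recorded as an event everywhere but
+	actually run on just one node, the optional EXECUTE ONLY ON
+	clause can be added (node IDs here are purely illustrative):
+</para>
+<para>
+	EXECUTE SCRIPT (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;SET ID = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;FILENAME = 'changes_20040510.sql',
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;EVENT NODE = 1,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;EXECUTE ONLY ON = 3
+	<br>);
+</para>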
+</div>
+
+
+<!-- **************************************** -->
+
+<sect3 id="stmt_wait_event"><title>WAIT FOR EVENT</title>
+
+<div style="margin-left:40px; margin-right:0px;">
+<sect3><title>Synopsis:</title>
+	WAIT FOR EVENT ( &lt;options&gt; );
+<sect3><title>Description:</title>
+<para>
+	Waits for event confirmation.
+</para>
+<para>
+	<Application/Slonik/ remembers the last event generated on every node during
+	script execution (events generated by earlier calls are
+	currently not checked). In certain situations it is necessary
+	that events generated on one node (such as <tt>CREATE
+	SET</tt>) are processed on another node before issuing more
+	commands (for instance, <a
+	href="#stmt_subscribe_set">SUBSCRIBE SET</a>).  <tt>WAIT FOR
+	EVENT</tt> may be used to cause the <application/slonik/ script to wait
+	until the subscriber node is ready for the next action.
+</para>
+<para>
+	<tt>WAIT FOR EVENT</tt> must be called outside of any try
+	block in order to work, since new confirm messages don't
+	become visible within a transaction.
+</para>
+<table border="0" cellpadding="10">
+<tr>
+	<td align="left" valign="top" nowrap><b>ORIGIN = &lt;ival&gt;|ALL</b></td>
+	<td align="left" valign="top"><para>
+		The origin of the event(s) to wait for.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>CONFIRMED = &lt;ival&gt;|ALL</b></td>
+	<td align="left" valign="top"><para>
+		The node ID of the receiver that must have confirmed the event(s).
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>WAIT ON = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The ID of the node on which to check the sl_confirm table.
+		The default value is 1.
+	</para></td>
+</tr>
+<tr>
+	<td align="left" valign="top" nowrap><b>TIMEOUT = &lt;ival&gt;</b></td>
+	<td align="left" valign="top"><para>
+		The number of seconds to wait. Default is 600 (10 minutes),
+		0 means wait forever.
+	</para></td>
+</tr>
+</table>
+<sect3><title>Example:</title>
+<para>
+	WAIT FOR EVENT (
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;ORIGIN = ALL,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;CONFIRMED = ALL,
+	<br>&nbsp;&nbsp;&nbsp;&nbsp;WAIT ON = 1
+	<br>);
+</para>
+</div>
+
+
+<sect1 id="slonikprocs"> <title/Slonik Stored Procedures/
+
+<para> The commands used in <Application/Slonik/ invoke stored procedures in the
+namespace created for the replication instance.  <Application/Slonik/ provides a
+convenient way to invoke these procedures; it is equally possible to
+invoke them directly to manage <productname/Slony-I/ instances. </para>
+
+<para> See the <a href= "schemadoc.html"> Schema Documentation </a> for
+more details. </para>
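+
+<para> For example, a procedure could be invoked directly from SQL;
+the function name and arguments below are purely illustrative, so
+consult the schema documentation for the actual signatures: </para>
+
+<para>
+	select "_mycluster".storeListen(3, 3, 1);
+</para>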
Index: monitoring.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/monitoring.html -Ldoc/adminguide/monitoring.html -u -w -r1.1 -r1.2
--- doc/adminguide/monitoring.html
+++ doc/adminguide/monitoring.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE=" Subscribing Nodes"
 HREF="subscribenodes.html"><LINK
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="MONITORING"
->12. Monitoring</A
+>4. Monitoring</A
 ></H1
 ><P
 >Here are some of things that you may find in your Slony logs, and explanations of what they mean. &#13;</P
@@ -88,23 +88,32 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN565"
->12.1. CONFIG notices</A
+NAME="AEN645"
+>4.1. CONFIG notices</A
 ></H2
 ><P
 >These entries are pretty straightforward. They are informative messages about your configuration. &#13;</P
 ><P
->Here are some typical entries that you will probably run into in your logs:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
+>Here are some typical entries that you will probably run into in your logs:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >CONFIG main: local node id = 1
 CONFIG main: loading current cluster configuration
 CONFIG storeNode: no_id=3 no_comment='Node 3'
 CONFIG storePath: pa_server=5 pa_client=1 pa_conninfo="host=127.0.0.1 dbname=foo user=postgres port=6132" pa_connretry=10
 CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3
 CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1'
-CONFIG main: configuration complete - starting threads</TT
+    CONFIG main: configuration complete - starting threads</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ></DIV
 ><DIV
@@ -112,19 +121,28 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN571"
->12.2. DEBUG Notices</A
+NAME="AEN650"
+>4.2. DEBUG Notices</A
 ></H2
 ><P
->Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
+>Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >localListenThread: This is the local thread that listens for events on the local node.
 remoteWorkerThread_X: The thread processing remote events.
 remoteListenThread_X: Listens for events on a remote node database.
 cleanupThread: Takes care of things like vacuuming, cleaning out the confirm and event tables, and deleting logs.
-syncThread: Generates sync events.</TT
+    syncThread: Generates sync events.</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 > WriteMe: I can't decide the format for the rest of this. I
@@ -187,7 +205,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: filelist.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/filelist.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/filelist.sgml -Ldoc/adminguide/filelist.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/filelist.sgml
+++ doc/adminguide/filelist.sgml
@@ -9,7 +9,6 @@
 <!entity defineset           SYSTEM "defineset.sgml">
 <!entity adminscripts           SYSTEM "adminscripts.sgml">
 <!entity startslons           SYSTEM "startslons.sgml">
-<!entity slonconfig           SYSTEM "slonconfig.sgml">
 <!entity subscribenodes           SYSTEM "subscribenodes.sgml">
 <!entity monitoring           SYSTEM "monitoring.sgml">
 <!entity maintenance           SYSTEM "maintenance.sgml">
@@ -22,8 +21,9 @@
 <!entity firstdb           SYSTEM "firstdb.sgml">
 <!entity help           SYSTEM "help.sgml">
 <!entity faq           SYSTEM "faq.sgml">
-
-
+<!entity reference     SYSTEM "reference.sgml">
+<!entity slonik SYSTEM "slonik.sgml">
+<!entity slon SYSTEM "slon.sgml">
 
 <!entity history    SYSTEM "history.sgml">
 <!entity legal      SYSTEM "legal.sgml">
Index: faq.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/faq.html -Ldoc/adminguide/faq.html -u -w -r1.1 -r1.2
--- doc/adminguide/faq.html
+++ doc/adminguide/faq.html
@@ -2,7 +2,7 @@
 <HTML
 ><HEAD
 ><TITLE
-> FAQ </TITLE
+>Slony-I FAQ</TITLE
 ><META
 NAME="GENERATOR"
 CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
@@ -12,8 +12,8 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="PREVIOUS"
-TITLE=" Other Information Sources"
-HREF="x931.html"><LINK
+TITLE=" More Slony-I Help "
+HREF="help.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
@@ -45,7 +45,7 @@
 ALIGN="left"
 VALIGN="bottom"
 ><A
-HREF="x931.html"
+HREF="help.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -71,9 +71,18 @@
 ><H1
 CLASS="TITLE"
 ><A
-NAME="FAQ"
->FAQ</A
+NAME="AEN1025"
+>Slony-I FAQ</A
 ></H1
+><H3
+CLASS="CORPAUTHOR"
+>The Slony Global Development Group</H3
+><H3
+CLASS="AUTHOR"
+><A
+NAME="AEN1028"
+>Christopher  Browne</A
+></H3
 ><HR></DIV
 ><P
 > Not all of these are, strictly speaking, <SPAN
@@ -84,26 +93,25 @@
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
->trouble found that seemed worth
-documenting</I
+>trouble found that seemed
+worth documenting</I
 ></SPAN
->.
-
- </P
+>.</P
 ><DIV
 CLASS="QANDASET"
 ><DL
 ><DT
 >Q: <A
-HREF="faq.html#AEN944"
+HREF="faq.html#AEN1036"
 >I looked for the <CODE
 CLASS="ENVAR"
 >_clustername</CODE
-> namespace, and it wasn't there.</A
+> namespace, and
+it wasn't there.</A
 ></DT
 ><DT
 >Q: <A
-HREF="faq.html#AEN955"
+HREF="faq.html#AEN1047"
 >Some events moving around, but no replication&#13;</A
 ></DT
 ></DL
@@ -114,13 +122,14 @@
 ><P
 ><BIG
 ><A
-NAME="AEN944"
+NAME="AEN1036"
 ></A
 ><B
 >Q: I looked for the <CODE
 CLASS="ENVAR"
 >_clustername</CODE
-> namespace, and it wasn't there.</B
+> namespace, and
+it wasn't there.</B
 ></BIG
 ></P
 ></DIV
@@ -155,7 +164,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN955"
+NAME="AEN1047"
 ></A
 ><B
 >Q: Some events moving around, but no replication&#13;</B
@@ -164,10 +173,19 @@
 ><P
 > Slony logs might look like the following:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432'
-ERROR  remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp,		  ev_minxid, ev_maxxid, ev_xip,		  ev_type,		  ev_data1, ev_data2,		  ev_data3, ev_data4,		  ev_data5, ev_data6,		  ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno &#62; '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress</TT
+    ERROR  remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp,		  ev_minxid, ev_maxxid, ev_xip,		  ev_type,		  ev_data1, ev_data2,		  ev_data3, ev_data4,		  ev_data5, ev_data6,		  ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno &#62; '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ></DIV
 ><DIV
@@ -248,7 +266,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN978"
+NAME="AEN1070"
 ></A
 ><B
 >Q: I tried creating a CLUSTER NAME with a "-" in it.
@@ -275,7 +293,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN984"
+NAME="AEN1076"
 ></A
 ><B
 >Q:  slon does not restart after crash&#13;</B
@@ -296,10 +314,16 @@
 ><B
 >A: </B
 >It's handy to keep a slonik script like the following one around to
-run in such cases:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
+run in such cases:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >twcsds004[/opt/twcsds004/OXRS/slony-scripts]$ cat restart_org.slonik 
 cluster name = oxrsorg ;
 node 1 admin conninfo = 'host=32.85.68.220 dbname=oxrsorg user=postgres port=5532';
@@ -309,7 +333,10 @@
 restart node 1;
 restart node 2;
 restart node 3;
-restart node 4;</TT
+    restart node 4;</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 > <TT
@@ -326,7 +353,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN996"
+NAME="AEN1087"
 ></A
 ><B
 >Q: ps finds passwords on command line&#13;</B
@@ -359,7 +386,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1005"
+NAME="AEN1096"
 ></A
 ><B
 >Q: Slonik fails - cannot load PostgreSQL library - <TT
@@ -432,15 +459,26 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1022"
+NAME="AEN1113"
 ></A
 ><B
 >Q: Table indexes with FQ namespace names
 
-<TT
-CLASS="COMMAND"
->set add table (set id = 1, origin = 1, id = 27, full qualified name = 'nspace.some_table', key = 'key_on_whatever', 
-	 comment = 'Table some_table in namespace nspace with a candidate primary key');</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    set add table (set id = 1, origin = 1, id = 27, 
+                   full qualified name = 'nspace.some_table', 
+                   key = 'key_on_whatever', 
+                   comment = 'Table some_table in namespace nspace with a candidate primary key');</PRE
+></TD
+></TR
+></TABLE
 >&#13;</B
 ></BIG
 ></P
@@ -468,18 +506,27 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1030"
+NAME="AEN1121"
 ></A
 ><B
 >Q: I'm trying to get a slave subscribed, and get the following
 messages in the logs:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >DEBUG1 copy_set 1
 DEBUG1 remoteWorkerThread_1: connected to provider DB
 WARN	remoteWorkerThread_1: transactions earlier than XID 127314958 are still in progress
-WARN	remoteWorkerThread_1: data copy for set 1 failed - sleep 60 seconds</TT
+    WARN	remoteWorkerThread_1: data copy for set 1 failed - sleep 60 seconds</PRE
+></TD
+></TR
+></TABLE
 >&#13;</B
 ></BIG
 ></P
@@ -522,23 +569,41 @@
 transaction blocking Slony-I from processing the sync.  You might want
 to take a look at pg_locks to see what's up:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >sampledb=# select * from pg_locks where transaction is not null order by transaction;
  relation | database | transaction |	pid	|	  mode		| granted 
 ----------+----------+-------------+---------+---------------+---------
 			 |			 |	127314921 | 2605100 | ExclusiveLock | t
 			 |			 |	127326504 | 5660904 | ExclusiveLock | t
-(2 rows)</TT
+    (2 rows)</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >See?  127314921 is indeed older than 127314958, and it's still running.
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >$ ps -aef | egrep '[2]605100'
 
-postgres 2605100  205018	0 18:53:43  pts/3  3:13 postgres: postgres sampledb localhost COPY </TT
+    postgres 2605100  205018	0 18:53:43  pts/3  3:13 postgres: postgres sampledb localhost COPY </PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >This happens to be a <TT
@@ -565,7 +630,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1050"
+NAME="AEN1141"
 ></A
 ><B
 >Q: ERROR: duplicate key violates unique constraint "sl_table-pkey"&#13;</B
@@ -574,11 +639,20 @@
 ><P
 >I tried setting up a second replication set, and got the following error:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >stdin:9: Could not create subscription set 2 for oxrslive!
 stdin:11: PGRES_FATAL_ERROR select "_oxrslive".setAddTable(2, 1, 'public.replic_test', 'replic_test__Slony-I_oxrslive_rowID_key', 'Table public.replic_test without primary key');  - ERROR:  duplicate key violates unique constraint "sl_table-pkey"
-CONTEXT:  PL/pgSQL function "setaddtable_int" line 71 at SQL statement</TT
+    CONTEXT:  PL/pgSQL function "setaddtable_int" line 71 at SQL statement</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ></DIV
 ><DIV
@@ -603,7 +677,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1058"
+NAME="AEN1149"
 ></A
 ><B
 >Q: I need to drop a table from a replication set</B
@@ -632,11 +706,20 @@
 ><P
 > If you are still using 1.0.1 or 1.0.2, the _essential_ functionality of SET DROP TABLE involves the functionality in droptable_int().  You can fiddle this by hand by finding the table ID for the table you want to get rid of, which you can find in sl_table, and then run the following three queries, on each host:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >  select _slonyschema.alterTableRestore(40);
   select _slonyschema.tableDropKey(40);
-  delete from _slonyschema.sl_table where tab_id = 40;</TT
+      delete from _slonyschema.sl_table where tab_id = 40;</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >The schema will obviously depend on how you defined the Slony-I
@@ -659,7 +742,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1072"
+NAME="AEN1163"
 ></A
 ><B
 >Q: I need to drop a sequence from a replication set&#13;</B
@@ -698,23 +781,41 @@
 >seq_id</CODE
 > values.
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >oxrsorg=# select * from _oxrsorg.sl_sequence  where seq_id in (93,59);
  seq_id | seq_reloid | seq_set |				 seq_comment				 
 --------+------------+---------+-------------------------------------
 	  93 |  107451516 |		 1 | Sequence public.whois_cachemgmt_seq
 	  59 |  107451860 |		 1 | Sequence public.epp_whoi_cach_seq_
-(2 rows)</TT
+    (2 rows)</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >The data that needs to be deleted to stop Slony from continuing to
 replicate these are thus:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
-delete from _oxrsorg.sl_sequence where seq_id in (93,59);</TT
+    delete from _oxrsorg.sl_sequence where seq_id in (93,59);</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >Those two queries could be submitted to all of the nodes via
@@ -746,7 +847,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1095"
+NAME="AEN1186"
 ></A
 ><B
 >Q: Slony-I: cannot add table to currently subscribed set 1&#13;</B
@@ -755,9 +856,18 @@
 ><P
 > I tried to add a table to a set, and got the following message:
 
-<TT
-CLASS="COMMAND"
->	Slony-I: cannot add table to currently subscribed set 1</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    	Slony-I: cannot add table to currently subscribed set 1</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ></DIV
 ><DIV
@@ -784,7 +894,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1104"
+NAME="AEN1195"
 ></A
 ><B
 >Q: Some nodes start consistently falling behind&#13;</B
@@ -795,9 +905,18 @@
 system performance suffering.&#13;</P
 ><P
 >I'm seeing long running queries of the form:
-<TT
-CLASS="COMMAND"
->	fetch 100 from LOG;</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    	fetch 100 from LOG;</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ></DIV
 ><DIV
@@ -851,7 +970,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1122"
+NAME="AEN1213"
 ></A
 ><B
 >Q: I started doing a backup using pg_dump, and suddenly Slony stops&#13;</B
@@ -892,9 +1011,18 @@
 ><P
 >The initial query that will be blocked is thus:
 
-<TT
-CLASS="COMMAND"
->	 select "_slonyschema".createEvent('_slonyschema, 'SYNC', NULL);	  </TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    select "_slonyschema".createEvent('_slonyschema', 'SYNC', NULL);	  </PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >(You can see this in <CODE
@@ -916,11 +1044,20 @@
 >slony1_funcs.c</TT
 >, and is localized in the code that does:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >  LOCK TABLE %s.sl_event;
   INSERT INTO %s.sl_event (...stuff...)
-  SELECT currval('%s.sl_event_seq');</TT
+      SELECT currval('%s.sl_event_seq');</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >The <TT
@@ -971,7 +1108,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1160"
+NAME="AEN1251"
 ></A
 ><B
 >Q: The slons spent the weekend out of commission [for
@@ -1021,7 +1158,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1172"
+NAME="AEN1263"
 ></A
 ><B
 >Q: I pointed a subscribing node to a different parent and it stopped replicating&#13;</B
@@ -1081,8 +1218,14 @@
 had to go through node 2, and added in direct listens between nodes 1
 and 3.
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >cluster name = oxrslive;
  node 1 admin conninfo='host=32.85.68.220 dbname=oxrslive user=postgres port=5432';
  node 2 admin conninfo='host=32.85.68.216 dbname=oxrslive user=postgres port=5432';
@@ -1093,7 +1236,10 @@
 		store listen (origin = 3, receiver = 1, provider = 3);
 		drop listen (origin = 1, receiver = 3, provider = 2);
 		drop listen (origin = 3, receiver = 1, provider = 2);
-}</TT
+    }</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >Immediately after this script was run, <TT
@@ -1135,7 +1281,7 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1203"
+NAME="AEN1294"
 ></A
 ><B
 >Q: After dropping a node, sl_log_1 isn't getting purged out anymore.&#13;</B
@@ -1161,8 +1307,14 @@
 >Diagnosis: Run the following query to see if there are any
 "phantom/obsolete/blocking" sl_confirm entries:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >oxrsbar=# select * from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
  con_origin | con_received | con_seqno |		 con_timestamp		  
 ------------+--------------+-----------+----------------------------
@@ -1172,7 +1324,10 @@
 		  501 |				2 |		6577 | 2004-11-14 10:34:45.717003
 			 4 |				5 |	  83999 | 2004-11-14 21:11:11.111686
 			 4 |				3 |	  83999 | 2004-11-24 16:32:39.020194
-(6 rows)</TT
+    (6 rows)</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >In version 1.0.5, the "drop node" function purges out entries in
@@ -1180,22 +1335,43 @@
 be done manually.  Supposing the node number is 3, then the query
 would be:
 
-<TT
-CLASS="COMMAND"
->delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >Alternatively, to go after <SPAN
 CLASS="QUOTE"
 >"all phantoms,"</SPAN
 > you could use
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >oxrsbar=# delete from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
-DELETE 6</TT
+    DELETE 6</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->General "due diligance" dictates starting with a
+>General <SPAN
+CLASS="QUOTE"
+>"due diligence"</SPAN
+> dictates starting with a
 <TT
 CLASS="COMMAND"
 >BEGIN</TT
@@ -1231,18 +1407,25 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1226"
+NAME="AEN1318"
 ></A
 ><B
->Q: Replication Fails - Unique Constraint Violation
-&#13;</B
+>Q: Replication Fails - Unique Constraint Violation&#13;</B
 ></BIG
 ></P
 ><P
->Replication has been running for a while, successfully, when a node encounters a "glitch," and replication logs are filled with repetitions of the following:
+>Replication has been running for a while, successfully, when a
+node encounters a "glitch," and replication logs are filled with
+repetitions of the following:
 
-<TT
-CLASS="COMMAND"
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >DEBUG2 remoteWorkerThread_1: syncing set 2 with 5 table(s) from provider 1
 DEBUG2 remoteWorkerThread_1: syncing set 1 with 41 table(s) from provider 1
 DEBUG2 remoteWorkerThread_1: syncing set 5 with 1 table(s) from provider 1
@@ -1261,7 +1444,10 @@
 delete from only public.contact_status where _rserv_ts='18139332';insert into "_oxrsapp".sl_log_1	  (log_origin, log_xid, log_tableid,		log_actionseq, log_cmdtype,		log_cmddata) values	  ('1', '919151224', '24', '35090551', 'D', '_rserv_ts=''18139333''');
 delete from only public.contact_status where _rserv_ts='18139333';" ERROR:  duplicate key violates unique constraint "contact_status_pkey"
  - qualification was: 
-ERROR  remoteWorkerThread_1: SYNC aborted</TT
+    ERROR  remoteWorkerThread_1: SYNC aborted</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >The transaction rolls back, and Slony-I tries again, and again,
@@ -1314,7 +1500,8 @@
 ></LI
 ><LI
 ><P
-> The scenario seems to involve a delete transaction having been missed by Slony-I.  </P
+> The scenario seems to involve a delete transaction
+having been missed by Slony-I.&#13;</P
 ></LI
 ></UL
 >&#13;</P
@@ -1341,13 +1528,32 @@
 ><P
 ><BIG
 ><A
-NAME="AEN1247"
+NAME="AEN1339"
 ></A
 ><B
->Q:  If you have a slonik script something like this, it will hang on you and never complete, because you can't have "wait for event" inside a try block. A try block is executed as one transaction, so the event that your waiting for will never arrive.
-
+>Q:  If you have a slonik script something like this, it
+will hang on you and never complete, because you can't have
 <TT
 CLASS="COMMAND"
+>wait for event</TT
+> inside a <TT
+CLASS="COMMAND"
+>try</TT
+> block. A <TT
+CLASS="COMMAND"
+>try</TT
+>
+block is executed as one transaction, and the event that you are
+waiting for can never arrive inside the scope of the transaction.
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >try {
 		  echo 'Moving set 1 to node 3';
 		  lock set (id=1, origin=1);
@@ -1362,8 +1568,11 @@
 		  echo 'Could not move set for cluster foo';
 		  unlock set (id=1, origin=1);
 		  exit -1;
-}</TT
-></B
+    }</PRE
+></TD
+></TR
+></TABLE
+>&#13;</B
 ></BIG
 ></P
 ></DIV
@@ -1375,7 +1584,8 @@
 >  You must not invoke <TT
 CLASS="COMMAND"
 >wait for event</TT
-> inside a <SPAN
+> inside a
+<SPAN
 CLASS="QUOTE"
 >"try"</SPAN
 > block.&#13;</P
@@ -1433,7 +1643,7 @@
 ALIGN="left"
 VALIGN="top"
 ><A
-HREF="x931.html"
+HREF="help.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -1457,7 +1667,7 @@
 WIDTH="33%"
 ALIGN="left"
 VALIGN="top"
->Other Information Sources</TD
+>More Slony-I Help</TD
 ><TD
 WIDTH="34%"
 ALIGN="center"
--- doc/adminguide/slonconfig.sgml
+++ /dev/null
@@ -1,82 +0,0 @@
-<sect1 id="slonconfig"> <title/Slon Configuration Options/
-
-<para>Slon parameters:
-
-<screen>
-usage: slon [options] clustername conninfo
-
-Options:
--d debuglevel		 verbosity of logging (1..8)
--s milliseconds	  SYNC check interval (default 10000)
--t milliseconds	  SYNC interval timeout (default 60000)
--g num				  maximum SYNC group size (default 6)
--c num				  how often to vacuum in cleanup cycles
--p filename			slon pid file
--f filename			slon configuration file
-</screen>
-
-<itemizedlist>
-
-<listitem><para><option/-d/
-
-<para>The eight levels of logging are:
-<itemizedlist>
-<listitem><Para>Error
-<listitem><Para>Warn
-<listitem><Para>Config
-<listitem><Para>Info
-<listitem><Para>Debug1
-<listitem><Para>Debug2
-<listitem><Para>Debug3
-<listitem><Para>Debug4
-</itemizedlist>
-		  
-<listitem><para><option/-s/
-
-<para>A SYNC event will be sent at least this often, regardless of whether update activity is detected.
-
-<para>Short sync times keep the master on a "short leash," updating the slaves more frequently.  If you have replicated sequences that are frequently updated _without_ there being tables that are affected, this keeps there from being times when only sequences are updated, and therefore <emphasis/no/ syncs take place.
-
-<para>Longer sync times allow there to be fewer events, which allows somewhat better efficiency.
-
-<listitem><para><option/-t/
-
-<para>The time before the SYNC check interval times out.
-
-<listitem><para><option/-g/
-
-<para>Number of SYNC events to try to cram together.  The default is 6, which is probably suitable for small systems that can devote only very limited bits of memory to slon.  If you have plenty of memory, it would be reasonable to increase this, as it will increase the amount of work done in each transaction, and will allow a subscriber that is behind by a lot to catch up more quickly.
-
-<para>Slon processes usually stay pretty small; even with large value for this option, slon would be expected to only grow to a few MB in size.
-
-<listitem><para><option/-c/
-
-<para>How often to vacuum (<emphasis/e.g./ - how many cleanup cycles to run before vacuuming). 
-
-<para>Set this to zero to disable slon-initiated vacuuming.  If you are using something like <application/pg_autovacuum/ to initiate vacuums, you may not need for slon to initiate vacuums itself.  If you are not, there are some tables Slony-I uses that collect a <emphasis/lot/ of dead tuples that should be vacuumed frequently.
-
-<listitem><para><option/-p/
-
-<para> The location of the PID file for the slon process.
-
-<listitem><para><option/-f/
-
-<para>The location of the slon configuration file.
-</itemizedlist>
-
-<!-- Keep this comment at the end of the file
-Local variables:
-mode:sgml
-sgml-omittag:nil
-sgml-shorttag:t
-sgml-minimize-attributes:nil
-sgml-always-quote-attributes:t
-sgml-indent-step:1
-sgml-indent-data:t
-sgml-parent-document:nil
-sgml-default-dtd-file:"./reference.ced"
-sgml-exposed-tags:nil
-sgml-local-catalogs:("/usr/lib/sgml/catalog")
-sgml-local-ecat-files:nil
-End:
--->
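The removed slonconfig.sgml documented slon's command-line options (its content now lives in the new slon.sgml man page). A dry-run sketch of a typical invocation using those options follows; the cluster name and conninfo are placeholders, and the leading `echo` makes this runnable without a database.

```shell
#!/bin/sh
# Dry-run sketch of starting slon with the options the removed
# slonconfig.sgml described: -d (log verbosity, 1..8), -s (SYNC check
# interval in ms), -g (max SYNC group size), -p (pid file).  Cluster
# name and conninfo are placeholder assumptions; drop "echo" to
# actually start the daemon.
CLUSTER=mycluster
CONNINFO="dbname=mydb host=localhost user=slony"
echo slon -d 2 -s 10000 -g 6 -p /var/run/slon.pid "$CLUSTER" "$CONNINFO"
```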
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -75,37 +75,39 @@
 createlang plpgsql -h $MASTERHOST $MASTERDBNAME
 </programlisting>
 
-<para>Slony-I does not yet automatically copy table definitions from a master when a
-slave subscribes to it, so we need to import this data.  We do this with
-pg_dump.
+<para>Slony-I does not yet automatically copy table definitions from a
+master when a slave subscribes to it, so we need to import this data.
+We do this with <application/pg_dump/.
 
 <programlisting>
 pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME
 </programlisting>
 
-<para>To illustrate how Slony-I allows for on the fly replication subscription, lets
-start up pgbench.  If you run the pgbench application in the foreground of a
-separate terminal window, you can stop and restart it with different
-parameters at any time.  You'll need to re-export the variables again so they
-are available in this session as well.
+<para>To illustrate how Slony-I allows for on the fly replication
+subscription, let's start up <application/pgbench/.  If you run the
+<application/pgbench/ application in the foreground of a separate
+terminal window, you can stop and restart it with different parameters
+at any time.  You'll need to re-export the variables again so they are
+available in this session as well.
 
-<para>The typical command to run pgbench would look like:
+<para>The typical command to run <application/pgbench/ would look like:
 
 <programlisting>
 pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
 </programlisting>
 
-<para>This will run pgbench with 5 concurrent clients each processing 1000
-transactions against the pgbench database running on localhost as the pgbench
-user.
+<para>This will run <application/pgbench/ with 5 concurrent clients
+each processing 1000 transactions against the pgbench database running
+on localhost as the pgbench user.
 
 <sect2><title/ Configuring the Database for Replication./
 
-<para>Creating the configuration tables, stored procedures, triggers and
-configuration is all done through the slonik tool.  It is a specialized
-scripting aid that mostly calls stored procedures in the master/salve (node)
-databases.  The script to create the initial configuration for the simple
-master-slave setup of our pgbench database looks like this:
+<para>Creating the configuration tables, stored procedures, triggers
+and configuration is all done through the slonik tool.  It is a
+specialized scripting aid that mostly calls stored procedures in the
+master/slave (node) databases.  The script to create the initial
+configuration for the simple master-slave setup of our pgbench
+database looks like this:
 
 <programlisting>
 #!/bin/sh
--- /dev/null
+++ doc/adminguide/bookindex.sgml
@@ -0,0 +1,5 @@
+<index>
+
+<!-- This file was produced by collateindex.pl.         -->
+<!-- Remove this comment if you edit this file by hand! -->
+</index>
Index: help.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/help.html -Ldoc/adminguide/help.html -u -w -r1.1 -r1.2
--- doc/adminguide/help.html
+++ doc/adminguide/help.html
@@ -12,13 +12,13 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE="Replicating Your First Database"
 HREF="firstdb.html"><LINK
 REL="NEXT"
-TITLE=" Other Information Sources"
-HREF="x931.html"><LINK
+TITLE="Slony-I FAQ"
+HREF="faq.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
@@ -64,7 +64,7 @@
 ALIGN="right"
 VALIGN="bottom"
 ><A
-HREF="x931.html"
+HREF="faq.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -79,10 +79,11 @@
 CLASS="SECT1"
 ><A
 NAME="HELP"
->21. More Slony-I Help</A
+>13. More Slony-I Help</A
 ></H1
 ><P
 >If you are having problems with Slony-I, you have several options for help:
+
 <P
 ></P
 ><UL
@@ -92,19 +93,22 @@
 HREF="http://slony.info/"
 TARGET="_top"
 >http://slony.info/</A
-> - the official "home" of Slony&#13;</P
+> - the official
+"home" of Slony&#13;</P
 ></LI
 ><LI
 ><P
-> Documentation on the Slony-I Site- Check the documentation on the Slony website: <A
+> Documentation on the Slony-I Site - Check the
+documentation on the Slony website: <A
 HREF="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx"
 TARGET="_top"
 >Howto </A
-></P
+>&#13;</P
 ></LI
 ><LI
 ><P
-> Other Documentation - There are several articles here <A
+> Other Documentation - There are several articles here
+<A
 HREF="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication"
 TARGET="_top"
 > Varlena GeneralBits </A
@@ -112,11 +116,17 @@
 ></LI
 ><LI
 ><P
-> IRC - There are usually some people on #slony on irc.freenode.net who may be able to answer some of your questions. There is also a bot named "rtfm_please" that you may want to chat with.</P
+> IRC - There are usually some people on #slony on
+irc.freenode.net who may be able to answer some of your
+questions. There is also a bot named "rtfm_please" that you may want
+to chat with.&#13;</P
 ></LI
 ><LI
 ><P
-> Mailing lists - The answer to your problem may exist in the Slony1-general mailing list archives, or you may choose to ask your question on the Slony1-general mailing list. The mailing list archives, and instructions for joining the list may be found <A
+> Mailing lists - The answer to your problem may exist
+in the Slony1-general mailing list archives, or you may choose to ask
+your question on the Slony1-general mailing list. The mailing list
+archives, and instructions for joining the list may be found <A
 HREF="http://gborg.postgresql.org/mailman/listinfo/slony1"
 TARGET="_top"
 >here. </A
@@ -124,7 +134,8 @@
 ></LI
 ><LI
 ><P
-> If your Russian is much better than your English, then <A
+> If your Russian is much better than your English,
+then <A
 HREF="http://kirov.lug.ru/wiki/Slony"
 TARGET="_top"
 > KirovOpenSourceCommunity:  Slony</A
@@ -132,6 +143,29 @@
 ></LI
 ></UL
 >&#13;</P
+><DIV
+CLASS="SECT2"
+><H2
+CLASS="SECT2"
+><A
+NAME="AEN1017"
+>13.1. Other Information Sources</A
+></H2
+><P
+></P
+><UL
+><LI
+><P
+> <A
+HREF="http://comstar.dotgeek.org/postgres/slony-config/"
+TARGET="_top"
+>slony-config</A
+> - A Perl tool for configuring Slony nodes using
+config files in an XML-based format that the tool transforms into a
+Slonik script</P
+></LI
+></UL
+></DIV
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
@@ -167,7 +201,7 @@
 ALIGN="right"
 VALIGN="top"
 ><A
-HREF="x931.html"
+HREF="faq.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -183,7 +217,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
@@ -191,7 +225,7 @@
 WIDTH="33%"
 ALIGN="right"
 VALIGN="top"
->Other Information Sources</TD
+>Slony-I FAQ</TD
 ></TR
 ></TABLE
 ></DIV
Index: failover.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/failover.html -Ldoc/adminguide/failover.html -u -w -r1.1 -r1.2
--- doc/adminguide/failover.html
+++ doc/adminguide/failover.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE="Reshaping a Cluster"
 HREF="reshape.html"><LINK
@@ -79,48 +79,53 @@
 CLASS="SECT1"
 ><A
 NAME="FAILOVER"
->15. Doing switchover and failover with Slony-I</A
+>7. Doing switchover and failover with Slony-I</A
 ></H1
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN622"
->15.1. Foreword</A
+NAME="AEN700"
+>7.1. Foreword</A
 ></H2
 ><P
->	 Slony-I is an asynchronous replication system.  Because of that, it
-	 is almost certain that at the moment the current origin of a set
-	 fails, the last transactions committed have not propagated to the
-	 subscribers yet.  They always fail under heavy load, and you know
-	 it.  Thus the goal is to prevent the main server from failing.
-	 The best way to do that is frequent maintenance.</P
-><P
->	 Opening the case of a running server is not exactly what we
-	 all consider professional system maintenance.  And interestingly,
-	 those users who use replication for backup and failover
-	 purposes are usually the ones that have a very low tolerance for
-	 words like "downtime".  To meet these requirements, Slony-I has
-	 not only failover capabilities, but controlled master role transfer
-	 features too.&#13;</P
+> Slony-I is an asynchronous replication system.  Because of
+that, it is almost certain that at the moment the current origin of a
+set fails, the last transactions committed have not propagated to the
+subscribers yet.  They always fail under heavy load, and you know it.
+Thus the goal is to prevent the main server from failing.  The best
+way to do that is frequent maintenance.&#13;</P
+><P
+> Opening the case of a running server is not exactly what we all
+consider professional system maintenance.  And interestingly, those
+users who use replication for backup and failover purposes are usually
+the ones that have a very low tolerance for words like "downtime".  To
+meet these requirements, Slony-I has not only failover capabilities,
+but controlled master role transfer features too.&#13;</P
 ><P
 >	 It is assumed in this document that the reader is familiar with
-	 the slonik utility and knows at least how to set up a simple
-	 2 node replication system with Slony-I.&#13;</P
+the slonik utility and knows at least how to set up a simple 2 node
+replication system with Slony-I.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN627"
->15.2. Switchover</A
+NAME="AEN705"
+>7.2. Switchover</A
 ></H2
 ><P
->	 We assume a current "origin" as node1 (AKA master) with one 
-	 "subscriber" as node2 (AKA slave).  A web application on a third
-	 server is accessing the database on node1.  Both databases are
+> We assume a current <SPAN
+CLASS="QUOTE"
+>"origin"</SPAN
+> as node1 (AKA master) with
+one <SPAN
+CLASS="QUOTE"
+>"subscriber"</SPAN
+> as node2 (AKA slave).  A web application on a
+third server is accessing the database on node1.  Both databases are
 	 up and running and replication is more or less in sync.
 
 <P
@@ -128,139 +133,139 @@
 ><UL
 ><LI
 ><P
->  At the time of this writing switchover to another server requires the application to reconnect to the database.  So in order to avoid	 any complications, we simply shut down the web server.  Users who use pg_pool for the applications database connections can shutdown	  the pool only.
-
-
-		</P
+> At the time of this writing switchover to another
+server requires the application to reconnect to the database.  So in
+order to avoid any complications, we simply shut down the web server.
+Users who use <B
+CLASS="APPLICATION"
+>pg_pool</B
+> for the application's database
+connections merely have to shut down the pool.&#13;</P
 ></LI
 ><LI
 ><P
-> A small slonik script executes the following commands:</P
-><P
-><TT
-CLASS="COMMAND"
->	lock set (id = 1, origin = 1);
+> A small slonik script executes the following commands:
 
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    	lock set (id = 1, origin = 1);
 	wait for event (origin = 1, confirmed = 2);
-
 	move set (id = 1, old origin = 1, new origin = 2);
-
-	wait for event (origin = 1, confirmed = 2);&#13;</TT
+    	wait for event (origin = 1, confirmed = 2);</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->	 After these commands, the origin (master role) of data set 1
-	 is now on node2.  It is not simply transferred.  It is done
-	 in a fashion so that node1 is now a fully synchronized subscriber
-	 actively replicating the set.  So the two nodes completely switched
-	 roles.&#13;</P
+> After these commands, the origin (master role) of data set 1 is
+now on node2.  It is not simply transferred.  It is done in a fashion
+so that node1 is now a fully synchronized subscriber actively
+replicating the set.  So the two nodes completely switched roles.&#13;</P
 ></LI
 ><LI
 ><P
->	 After reconfiguring the web application (or pgpool) to connect to	 the database on node2 instead, the web server is restarted and	 resumes normal operation.&#13;</P
+> After reconfiguring the web application (or pgpool)
+to connect to the database on node2 instead, the web server is
+restarted and resumes normal operation.&#13;</P
 ><P
 >	 Done in one shell script, that does the shutdown, slonik, move
-	 config files and startup all together, this entire procedure
-	 takes less than 10 seconds.&#13;</P
+config files and startup all together, this entire procedure takes
+less than 10 seconds.&#13;</P
 ></LI
 ></UL
-></P
+>&#13;</P
 ><P
 >	 It is now possible to simply shutdown node1 and do whatever is
 	 required.  When node1 is restarted later, it will start replicating
-	 again and eventually catch up after a while.  At this point the
-	 whole procedure is executed with exchanged node IDs and the
-	 original configuration is restored.&#13;</P
+again and eventually catch up after a while.  At this point the whole
+procedure is executed with exchanged node IDs and the original
+configuration is restored.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN642"
->15.3. Failover</A
+NAME="AEN722"
+>7.3. Failover</A
 ></H2
 ><P
 >	 Because of the possibility of missing not-yet-replicated
-
-	 transactions that are committed, failover is the worst thing
-
-	 that can happen in a master-slave replication scenario.  If there
-
-	 is any possibility to bring back the failed server even if only
-
-	 for a few minutes, we strongly recommended that you follow the
-
-	 switchover procedure above.
-&#13;</P
-><P
->	 Slony does not provide any automatic detection for failed systems.
-
-	 Abandoning committed transactions is a business decision that
-
-	 cannot be made by a database.  If someone wants to put the
-
-	 commands below into a script executed automatically from the
-
-	 network monitoring system, well ... its your data.
+transactions that are committed, failover is the worst thing that can
+happen in a master-slave replication scenario.  If there is any
+possibility to bring back the failed server even if only for a few
+minutes, we strongly recommend that you follow the switchover
+procedure above.&#13;</P
+><P
+> Slony does not provide any automatic detection for failed
+systems.  Abandoning committed transactions is a business decision
+that cannot be made by a database.  If someone wants to put the
+commands below into a script executed automatically from the network
+monitoring system, well ... it's your data.
 
 <P
 ></P
 ><UL
 ><LI
 ><P
->	The slonik command</P
-><P
-><TT
-CLASS="COMMAND"
->	failover (id = 1, backup node = 2);</TT
+>	The slonik command
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    	failover (id = 1, backup node = 2);</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >	 causes node2 to assume the ownership (origin) of all sets that
-
-	 have node1 as their current origin.  In the case there would be
-
-	 more nodes, All direct subscribers of node1 are instructed that
-
-	 this is happening.  Slonik would also query all direct subscribers
-
-	 to figure out which node has the highest replication status
-
-	 (latest committed transaction) for each set, and the configuration
-
-	 would be changed in a way that node2 first applies those last
-
-	 minute changes before actually allowing write access to the
-
-	 tables.
-&#13;</P
+have node1 as their current origin.  In case there are more
+nodes, all direct subscribers of node1 are instructed that this is
+happening.  Slonik would also query all direct subscribers to figure
+out which node has the highest replication status (latest committed
+transaction) for each set, and the configuration would be changed in a
+way that node2 first applies those last minute changes before actually
+allowing write access to the tables.&#13;</P
 ><P
 >	 In addition, all nodes that subscribed directly from node1 will
-
-	 now use node2 as data provider for the set.  This means that
-
-	 after the failover command succeeded, no node in the entire
-
-	 replication setup will receive anything from node1 any more.&#13;</P
+now use node2 as data provider for the set.  This means that after the
+failover command succeeded, no node in the entire replication setup
+will receive anything from node1 any more.  &#13;</P
 ></LI
 ><LI
 ><P
->&#13;	 Reconfigure and restart the application (or pgpool) to cause it
-
-	 to reconnect to node2.
-&#13;</P
+> Reconfigure and restart the application (or pgpool)
+to cause it to reconnect to node2.&#13;</P
 ></LI
 ><LI
 ><P
->	 After the failover is complete and node2 accepts write operations
-
-	 against the tables, remove all remnants of node1's configuration
+> After the failover is complete and node2 accepts
+write operations against the tables, remove all remnants of node1's
+configuration information with the slonik command
 
-	 information with the slonik command
-&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->	drop node (id = 1, event node = 2);</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="90%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    	drop node (id = 1, event node = 2);</PRE
+></TD
+></TR
+></TABLE
 ></P
 ></LI
 ></UL
@@ -271,15 +276,15 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN659"
->15.4. After failover, getting back node1</A
+NAME="AEN737"
+>7.4. After failover, getting back node1</A
 ></H2
 ><P
 >	 After the above failover, the data stored on node1 must be
 	 considered out of sync with the rest of the nodes.  Therefore, the
-	 only way to get node1 back and transfer the master role to it is
-	 to rebuild it from scratch as a slave, let it catch up and then
-	 follow the switchover procedure.
+only way to get node1 back and transfer the master role to it is to
+rebuild it from scratch as a slave, let it catch up and then follow
+the switchover procedure.
 
 
  </P
@@ -335,7 +340,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
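failover.html above notes that the whole switchover, "done in one shell script," takes under 10 seconds. A sketch of such a script follows; cluster name, node conninfos, and set id are placeholder assumptions, and the final slonik call is commented out so the sketch only emits the script.

```shell
#!/bin/sh
# Sketch of the one-shot switchover failover.html describes: lock the
# set on the old origin, wait for the lock to be confirmed, move the
# set, wait again.  All names and ids are placeholders; uncomment the
# last line to feed the script to slonik for real.
cat > switchover.slonik <<'EOF'
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
lock set (id = 1, origin = 1);
wait for event (origin = 1, confirmed = 2);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2);
EOF
# slonik switchover.slonik
```

In practice the same wrapper also shuts the application (or pg_pool) down first and restarts it against node2 afterwards, as the numbered steps describe.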
Index: addthings.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/addthings.html -Ldoc/adminguide/addthings.html -u -w -r1.1 -r1.2
--- doc/adminguide/addthings.html
+++ doc/adminguide/addthings.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE=" Slony Listen Paths"
 HREF="listenpaths.html"><LINK
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="ADDTHINGS"
->17. Adding Things to Replication</A
+>9. Adding Things to Replication</A
 ></H1
 ><P
 >You may discover that you have missed replicating things that
@@ -188,7 +188,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
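The FAQ earlier shows that Slony-I refuses to add a table directly to a currently subscribed set; the usual workaround, which addthings.html covers, is to put the new table in a set of its own, subscribe that set along the same paths, and then merge it. A sketch follows; all ids and names are placeholder assumptions, and in a real run one waits for the subscription to complete before merging.

```shell
#!/bin/sh
# Sketch of the "add a table later" workaround: create a temporary
# set for the new table, subscribe it, then merge it into the
# original set.  Ids and names are placeholders; uncomment the last
# line to run it, and note that the merge must not be issued until
# the subscription has completed.
cat > addtable.slonik <<'EOF'
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
create set (id = 2, origin = 1, comment = 'newly added tables');
set add table (set id = 2, origin = 1, id = 50,
               fully qualified name = 'public.newtable');
subscribe set (id = 2, provider = 1, receiver = 2, forward = no);
merge set (id = 1, add id = 2, origin = 1);
EOF
# slonik addtable.slonik
```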
Index: concepts.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/concepts.html -Ldoc/adminguide/concepts.html -u -w -r1.1 -r1.2
--- doc/adminguide/concepts.html
+++ doc/adminguide/concepts.html
@@ -12,10 +12,10 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyintro.html"><LINK
 REL="PREVIOUS"
-TITLE="Slonik"
-HREF="slonik.html"><LINK
+TITLE=" Slony-I Installation"
+HREF="installation.html"><LINK
 REL="NEXT"
 TITLE="Defining Slony-I Clusters"
 HREF="cluster.html"><LINK
@@ -50,7 +50,7 @@
 ALIGN="left"
 VALIGN="bottom"
 ><A
-HREF="slonik.html"
+HREF="installation.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="CONCEPTS"
->5. Slony-I Concepts</A
+>4. Slony-I Concepts</A
 ></H1
 ><P
 >In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
@@ -113,35 +113,61 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN231"
->5.1. Cluster</A
+NAME="AEN212"
+>4.1. Cluster</A
 ></H2
 ><P
 >In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases.&#13;</P
 ><P
 >The cluster name is specified in each and every Slonik script via the directive:
-<TT
-CLASS="COMMAND"
->cluster name = 'something';</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    cluster name = 'something';</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->If the Cluster name is 'something', then Slony-I will create, in each database instance in the cluster, the namespace/schema '_something'.&#13;</P
+>If the Cluster name is <CODE
+CLASS="ENVAR"
+>something</CODE
+>, then Slony-I will
+create, in each database instance in the cluster, the namespace/schema
+<CODE
+CLASS="ENVAR"
+>_something</CODE
+>.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN237"
->5.2. Node</A
+NAME="AEN220"
+>4.2. Node</A
 ></H2
 ><P
 >A Slony-I Node is a named PostgreSQL database that will be participating in replication.  &#13;</P
 ><P
 >It is defined, near the beginning of each Slonik script, using the directive:
-<TT
-CLASS="COMMAND"
-> NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>     NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >The CONNINFO information indicates a string argument that will ultimately be passed to the <CODE
@@ -170,34 +196,49 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN250"
->5.3. Replication Set</A
+NAME="AEN233"
+>4.3. Replication Set</A
 ></H2
 ><P
->A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a Slony-I cluster.&#13;</P
+>A replication set is defined as a set of tables and sequences
+that are to be replicated between nodes in a Slony-I cluster.&#13;</P
 ><P
->You may have several sets, and the "flow" of replication does not need to be identical between those sets.&#13;</P
+>You may have several sets, and the <SPAN
+CLASS="QUOTE"
+>"flow"</SPAN
+> of replication does
+not need to be identical between those sets.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN254"
->5.4. Provider and Subscriber</A
+NAME="AEN238"
+>4.4. Provider and Subscriber</A
 ></H2
 ><P
->Each replication set has some "master" node, which winds up
-being the <SPAN
+>Each replication set has some <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> node, which
+winds up being the <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >only</I
 ></SPAN
-> place where user applications are permitted
-to modify data in the tables that are being replicated.  That "master"
-may be considered the master "provider node;" it is the main place
-from which data is provided.&#13;</P
+> place where user
+applications are permitted to modify data in the tables that are being
+replicated.  That <SPAN
+CLASS="QUOTE"
+>"master"</SPAN
+> may be considered the
+originating <SPAN
+CLASS="QUOTE"
+>"provider node;"</SPAN
+> it is the main place from
+which data is provided.&#13;</P
 ><P
 >Other nodes in the cluster will subscribe to the replication
 set, indicating that they want to receive the data.&#13;</P
@@ -205,10 +246,7 @@
 >The "master" node will never be considered a "subscriber."  But
 Slony-I supports the notion of cascaded subscriptions, that is, a node
 that is subscribed to the "master" may also behave as a "provider" to
-other nodes in the cluster.
-
-
- </P
+other nodes in the cluster.&#13;</P
 ></DIV
 ></DIV
 ><DIV
@@ -227,7 +265,7 @@
 ALIGN="left"
 VALIGN="top"
 ><A
-HREF="slonik.html"
+HREF="installation.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -255,13 +293,13 @@
 WIDTH="33%"
 ALIGN="left"
 VALIGN="top"
->Slonik</TD
+>Slony-I Installation</TD
 ><TD
 WIDTH="34%"
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.1 -r1.2
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -497,17 +497,17 @@
 be done manually.  Supposing the node number is 3, then the query
 would be:
 
-<command>
+<screen>
 delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;
-</command>
+</screen>
 
 <para>Alternatively, to go after <quote/all phantoms,/ you could use
-<command>
+<screen>
 oxrsbar=# delete from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
 DELETE 6
-</command>
+</screen>
 
-<para>General "due diligence" dictates starting with a
+<para>General <quote/due diligence/ dictates starting with a
 <command/BEGIN/, looking at the contents of sl_confirm before,
 ensuring that only the expected records are purged, and then, only
 after that, confirming the change with a <command/COMMIT/.  If you
--- /dev/null
+++ doc/adminguide/reference.sgml
@@ -0,0 +1,6 @@
+<reference id="slony-commands"> <title>Slony-I Commands</title>
+
+&slon;
+&slonik;
+
+</reference>
\ No newline at end of file
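The cautious cleanup procedure described in the faq.sgml hunk above — BEGIN, inspect sl_confirm, then COMMIT only after verifying the rows — can be sketched as a single transaction. This is an illustrative sketch, not part of the commit; the `_oxrsbar` schema name is taken from the example in the hunk and should be replaced by your own cluster namespace:

```sql
-- Wrap the phantom-confirmation cleanup in a transaction so the
-- candidate rows can be inspected and the change rolled back if needed.
BEGIN;

-- Inspect first: only confirmations referencing nodes that are
-- absent from sl_node should appear here.
SELECT * FROM _oxrsbar.sl_confirm
 WHERE con_origin   NOT IN (SELECT no_id FROM _oxrsbar.sl_node)
    OR con_received NOT IN (SELECT no_id FROM _oxrsbar.sl_node);

-- If, and only if, the listed rows are the expected phantoms:
DELETE FROM _oxrsbar.sl_confirm
 WHERE con_origin   NOT IN (SELECT no_id FROM _oxrsbar.sl_node)
    OR con_received NOT IN (SELECT no_id FROM _oxrsbar.sl_node);

COMMIT;  -- or ROLLBACK; if the SELECT showed anything unexpected
```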
Index: dropthings.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/dropthings.html -Ldoc/adminguide/dropthings.html -u -w -r1.1 -r1.2
--- doc/adminguide/dropthings.html
+++ doc/adminguide/dropthings.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE=" Adding Things to Replication"
 HREF="addthings.html"><LINK
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="DROPTHINGS"
->18. Dropping things from Slony Replication</A
+>10. Dropping things from Slony Replication</A
 ></H1
 ><P
 >There are several things you might want to do involving dropping things from Slony-I replication.&#13;</P
@@ -88,8 +88,8 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN729"
->18.1. Dropping A Whole Node</A
+NAME="AEN817"
+>10.1. Dropping A Whole Node</A
 ></H2
 ><P
 >If you wish to drop an entire node from replication, the Slonik command DROP NODE should do the trick.  &#13;</P
@@ -109,8 +109,8 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN737"
->18.2. Dropping An Entire Set</A
+NAME="AEN825"
+>10.2. Dropping An Entire Set</A
 ></H2
 ><P
 >If you wish to stop replicating a particular replication set,
@@ -166,8 +166,8 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN750"
->18.3. Unsubscribing One Node From One Set</A
+NAME="AEN838"
+>10.3. Unsubscribing One Node From One Set</A
 ></H2
 ><P
 >The <TT
@@ -235,8 +235,8 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN762"
->18.4. Dropping A Table From A Set</A
+NAME="AEN850"
+>10.4. Dropping A Table From A Set</A
 ></H2
 ><P
 >In Slony 1.0.5 and above, there is a Slonik command <TT
@@ -253,13 +253,22 @@
 ><P
 >You can fiddle this by hand by finding the table ID for the
 table you want to get rid of, which you can find in sl_table, and then
-run the following three queries, on each host:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
+run the following three queries, on each host:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >  select _slonyschema.alterTableRestore(40);
   select _slonyschema.tableDropKey(40);
-  delete from _slonyschema.sl_table where tab_id = 40;</TT
+  delete from _slonyschema.sl_table where tab_id = 40;</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >The schema will obviously depend on how you defined the Slony-I
@@ -278,8 +287,8 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN773"
->18.5. Dropping A Sequence From A Set</A
+NAME="AEN860"
+>10.5. Dropping A Sequence From A Set</A
 ></H2
 ><P
 >Just as with <TT
@@ -296,13 +305,21 @@
 ><P
 >The data that needs to be deleted to stop Slony from continuing
 to replicate the two sequences identified with Sequence IDs 93 and 59
-are thus:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
+are thus:
 
-delete from _oxrsorg.sl_sequence where seq_id in (93,59);&#13;</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
+    delete from _oxrsorg.sl_sequence where seq_id in (93,59);</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 > Those two queries could be submitted to all of the nodes via
@@ -371,7 +388,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: installation.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/installation.html -Ldoc/adminguide/installation.html -u -w -r1.1 -r1.2
--- doc/adminguide/installation.html
+++ doc/adminguide/installation.html
@@ -12,13 +12,13 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyintro.html"><LINK
 REL="PREVIOUS"
 TITLE=" Requirements"
 HREF="requirements.html"><LINK
 REL="NEXT"
-TITLE="Slonik"
-HREF="slonik.html"><LINK
+TITLE="Slony-I Concepts"
+HREF="concepts.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
@@ -64,7 +64,7 @@
 ALIGN="right"
 VALIGN="bottom"
 ><A
-HREF="slonik.html"
+HREF="concepts.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -83,12 +83,19 @@
 ></H1
 ><P
 >You should have obtained the Slony-I source from the previous step. Unpack it.</P
-><P
-><TT
-CLASS="COMMAND"
+><TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >gunzip slony.tar.gz;
-tar xf slony.tar</TT
-></P
+tar xf slony.tar</PRE
+></TD
+></TR
+></TABLE
 ><P
 > This will create a directory Slony-I under the current
directory with the Slony-I sources.  Head into that directory for
@@ -98,26 +105,31 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN176"
+NAME="AEN175"
 >3.1. Short Version</A
 ></H2
 ><P
-><TT
-CLASS="COMMAND"
->./configure --with-pgsourcetree=/wherever/the/source/is </TT
-></P
-><P
-> <TT
-CLASS="COMMAND"
-> gmake all; gmake install </TT
-></P
+><TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    ./configure --with-pgsourcetree=/wherever/the/source/is
+    gmake all; gmake install </PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN182"
+NAME="AEN179"
 >3.2. Configuration</A
 ></H2
 ><P
@@ -131,14 +143,21 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN185"
+NAME="AEN182"
 >3.3. Example</A
 ></H2
-><P
-> <TT
-CLASS="COMMAND"
->./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3</TT
-></P
+><TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    ./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3</PRE
+></TD
+></TR
+></TABLE
 ><P
 >This script will run a number of tests to guess values for
 various dependent variables and try to detect some quirks of your
@@ -155,22 +174,36 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN191"
+NAME="AEN187"
 >3.4. Build</A
 ></H2
 ><P
 >To start the build process, type
 
-<TT
-CLASS="COMMAND"
->gmake all</TT
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    gmake all</PRE
+></TD
+></TR
+></TABLE
 ></P
 ><P
-> Be sure to use GNU make; on BSD systems, it is called gmake; on Linux, GNU make is typically the native "make", so the name of the command you type in may vary somewhat. The build may take anywhere from a few seconds to 2 minutes depending on how fast your hardware is at compiling things.  The last line displayed should be</P
+> Be sure to use GNU make; on BSD systems, it is called gmake; on
+Linux, GNU make is typically the native "make", so the name of the
+command you type in may vary somewhat. The build may take anywhere
+from a few seconds to 2 minutes depending on how fast your hardware is
+at compiling things.  The last line displayed should be</P
 ><P
 > <TT
 CLASS="COMMAND"
->All of Slony-I is successfully made.  Ready to install.</TT
+> All of Slony-I is successfully made.  Ready to
+install.  </TT
 ></P
 ></DIV
 ><DIV
@@ -178,7 +211,7 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN198"
+NAME="AEN194"
 >3.5. Installing Slony-I</A
 ></H2
 ><P
@@ -233,7 +266,7 @@
 ALIGN="right"
 VALIGN="top"
 ><A
-HREF="slonik.html"
+HREF="concepts.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -249,7 +282,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
@@ -257,7 +290,7 @@
 WIDTH="33%"
 ALIGN="right"
 VALIGN="top"
->Slonik</TD
+>Slony-I Concepts</TD
 ></TR
 ></TABLE
 ></DIV
Index: subscribenodes.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/subscribenodes.html -Ldoc/adminguide/subscribenodes.html -u -w -r1.1 -r1.2
--- doc/adminguide/subscribenodes.html
+++ doc/adminguide/subscribenodes.html
@@ -12,10 +12,10 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
-TITLE="Slon Configuration Options"
-HREF="slonconfig.html"><LINK
+TITLE="Slon daemons"
+HREF="slonstart.html"><LINK
 REL="NEXT"
 TITLE="Monitoring"
 HREF="monitoring.html"><LINK
@@ -50,7 +50,7 @@
 ALIGN="left"
 VALIGN="bottom"
 ><A
-HREF="slonconfig.html"
+HREF="slonstart.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -79,15 +79,33 @@
 CLASS="SECT1"
 ><A
 NAME="SUBSCRIBENODES"
->11. Subscribing Nodes</A
+>3. Subscribing Nodes</A
 ></H1
 ><P
->Before you subscribe a node to a set, be sure that you have slons running for both the master and the new subscribing node. If you don't have slons running, nothing will happen, and you'll beat your head against a wall trying to figure out what's going on.&#13;</P
+>Before you subscribe a node to a set, be sure that you have
+<B
+CLASS="APPLICATION"
+>slon</B
+>s running for both the master and the new
+subscribing node. If you don't have slons running, nothing will
+happen, and you'll beat your head against a wall trying to figure out
+what is going on.&#13;</P
 ><P
->Subscribing a node to a set is done by issuing the slonik command "subscribe set". It may seem tempting to try to subscribe several nodes to a set within the same try block like this:&#13;</P
-><P
-> <TT
+>Subscribing a node to a set is done by issuing the slonik
+command <TT
 CLASS="COMMAND"
+>subscribe set</TT
+>. It may seem tempting to try to
+subscribe several nodes to a set within a single try block like this:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >try {
 		  echo 'Subscribing sets';
 		  subscribe set (id = 1, provider=1, receiver=2, forward=yes);
@@ -96,10 +114,27 @@
 } on error {
 		  echo 'Could not subscribe the sets!';
 		  exit -1;
-}</TT
->&#13;</P
+}</PRE
+></TD
+></TR
+></TABLE
+>
+&#13;</P
 ><P
-> You are just asking for trouble if you try to subscribe sets like that. The proper procedure is to subscribe one node at a time, and to check the logs and databases before you move onto subscribing the next node to the set. It is also worth noting that success within the above slonik try block does not imply that nodes 2, 3, and 4 have all been successfully subscribed. It merely guarantees that the slonik commands were received by the slon running on the master node.&#13;</P
+> You are just asking for trouble if you try to subscribe sets in
+that fashion. The proper procedure is to subscribe one node at a time,
+and to check the logs and databases before you move on to subscribing
+the next node to the set. It is also worth noting that the
+<SPAN
+CLASS="QUOTE"
+>"success"</SPAN
+> within the above slonik try block does not imply that
+nodes 2, 3, and 4 have all been successfully subscribed. It merely
+indicates that the slonik commands were successfully received by the
+<B
+CLASS="APPLICATION"
+>slon</B
+> running on the master node.&#13;</P
 ><P
 >A typical sort of problem that will arise is that a cascaded
 subscriber is looking for a provider that is not ready yet.  In that
@@ -110,38 +145,81 @@
 >never</I
 ></SPAN
 > pick up the
-subscriber.  It will get "stuck" waiting for a past event to take
-place.  The other nodes will be convinced that it is successfully
+subscriber.  It will get <SPAN
+CLASS="QUOTE"
+>"stuck"</SPAN
+> waiting for a past event to
+take place.  The other nodes will be convinced that it is successfully
 subscribed (because no error report ever made it back to them); a
-request to unsubscribe the node will be "blocked" because the node is
-stuck on the attempt to subscribe it.&#13;</P
+request to unsubscribe the node will be <SPAN
+CLASS="QUOTE"
+>"blocked"</SPAN
+> because the
+node is stuck on the attempt to subscribe it.&#13;</P
 ><P
->When you subscribe a node to a set, you should see something like this in your slony logs for the master node:&#13;</P
-><P
-> <TT
-CLASS="COMMAND"
->DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET</TT
+>When you subscribe a node to a set, you should see something
+like this in your slony logs for the master node:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->You should also start seeing log entries like this in the slony logs for the subscribing node:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->DEBUG2 remoteWorkerThread_1: copy table public.my_table</TT
+>You should also start seeing log entries like this in the slony logs for the subscribing node:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
+>    DEBUG2 remoteWorkerThread_1: copy table public.my_table</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->It may take some time for larger tables to be copied from the master node to the new subscriber. If you check the pg_stat_activity table on the master node, you should see a query that is copying the table to stdout.&#13;</P
-><P
->The table sl_subscribe on both the master, and the new subscriber should have entries for the new subscription:&#13;</P
-><P
-><TT
-CLASS="COMMAND"
+>It may take some time for larger tables to be copied from the
+master node to the new subscriber. If you check the pg_stat_activity
+table on the master node, you should see a query that is copying the
+table to stdout.&#13;</P
+><P
+>The table <CODE
+CLASS="ENVAR"
+>sl_subscribe</CODE
+> on both the master, and the new
+subscriber should contain entries for the new subscription:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 > sub_set | sub_provider | sub_receiver | sub_forward | sub_active
 ---------+--------------+--------------+-------------+------------
-	1	  |				1 |				2 | t			  | t</TT
+       1 |            1 |            2 | t           | t</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->A final test is to insert a row into a table on the master node, and to see if the row is copied to the new subscriber. 
+>A final test is to insert a row into one of the replicated
+tables on the master node, and verify that the row is copied to the
+new subscriber.
 
 
 
@@ -163,7 +241,7 @@
 ALIGN="left"
 VALIGN="top"
 ><A
-HREF="slonconfig.html"
+HREF="slonstart.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -191,13 +269,13 @@
 WIDTH="33%"
 ALIGN="left"
 VALIGN="top"
->Slon Configuration Options</TD
+>Slon daemons</TD
 ><TD
 WIDTH="34%"
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: slony.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slony.html -Ldoc/adminguide/slony.html -u -w -r1.1 -r1.2
--- doc/adminguide/slony.html
+++ doc/adminguide/slony.html
@@ -9,7 +9,7 @@
 REV="MADE"
 HREF="mailto:cbbrowne at gmail.com"><LINK
 REL="NEXT"
-HREF="t24.html"><LINK
+HREF="slonyintro.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
@@ -117,46 +117,46 @@
 ></DT
 ><DT
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ></A
 ></DT
 ><DD
 ><DL
 ><DT
 >1. <A
-HREF="t24.html#INTRODUCTION"
+HREF="slonyintro.html#INTRODUCTION"
 >Introduction to Slony-I</A
 ></DT
 ><DD
 ><DL
 ><DT
 >1.1. <A
-HREF="t24.html#AEN28"
+HREF="slonyintro.html#AEN28"
 >Why yet another replication system?</A
 ></DT
 ><DT
 >1.2. <A
-HREF="t24.html#AEN31"
+HREF="slonyintro.html#AEN31"
 >What Slony-I is</A
 ></DT
 ><DT
 >1.3. <A
-HREF="t24.html#AEN42"
+HREF="slonyintro.html#AEN42"
 >Slony-I is not</A
 ></DT
 ><DT
 >1.4. <A
-HREF="t24.html#AEN48"
+HREF="slonyintro.html#AEN48"
 >Why doesn't Slony-I do automatic fail-over/promotion?</A
 ></DT
 ><DT
 >1.5. <A
-HREF="t24.html#AEN53"
+HREF="slonyintro.html#AEN53"
 >Current Limitations</A
 ></DT
 ><DT
 >1.6. <A
-HREF="t24.html#SLONYLISTENERCOSTS"
+HREF="slonyintro.html#SLONYLISTENERCOSTS"
 >Slony-I Communications
 Costs</A
 ></DT
@@ -200,432 +200,455 @@
 ><DL
 ><DT
 >3.1. <A
-HREF="installation.html#AEN176"
+HREF="installation.html#AEN175"
 >Short Version</A
 ></DT
 ><DT
 >3.2. <A
-HREF="installation.html#AEN182"
+HREF="installation.html#AEN179"
 >Configuration</A
 ></DT
 ><DT
 >3.3. <A
-HREF="installation.html#AEN185"
+HREF="installation.html#AEN182"
 >Example</A
 ></DT
 ><DT
 >3.4. <A
-HREF="installation.html#AEN191"
+HREF="installation.html#AEN187"
 >Build</A
 ></DT
 ><DT
 >3.5. <A
-HREF="installation.html#AEN198"
+HREF="installation.html#AEN194"
 >Installing Slony-I</A
 ></DT
 ></DL
 ></DD
 ><DT
 >4. <A
-HREF="slonik.html"
->Slonik</A
-></DT
-><DD
-><DL
-><DT
->4.1. <A
-HREF="slonik.html#AEN206"
->Introduction</A
-></DT
-><DT
->4.2. <A
-HREF="slonik.html#AEN209"
->General outline</A
-></DT
-></DL
-></DD
-><DT
->5. <A
 HREF="concepts.html"
 >Slony-I Concepts</A
 ></DT
 ><DD
 ><DL
 ><DT
->5.1. <A
-HREF="concepts.html#AEN231"
+>4.1. <A
+HREF="concepts.html#AEN212"
 >Cluster</A
 ></DT
 ><DT
->5.2. <A
-HREF="concepts.html#AEN237"
+>4.2. <A
+HREF="concepts.html#AEN220"
 >Node</A
 ></DT
 ><DT
->5.3. <A
-HREF="concepts.html#AEN250"
+>4.3. <A
+HREF="concepts.html#AEN233"
 >Replication Set</A
 ></DT
 ><DT
->5.4. <A
-HREF="concepts.html#AEN254"
+>4.4. <A
+HREF="concepts.html#AEN238"
 >Provider and Subscriber</A
 ></DT
 ></DL
 ></DD
 ><DT
->6. <A
+>5. <A
 HREF="cluster.html"
 >Defining Slony-I Clusters</A
 ></DT
 ><DT
->7. <A
-HREF="x267.html"
->Defining Slony-I Replication Sets</A
+>6. <A
+HREF="definingsets.html"
+>Defining Slony-I Replication
+Sets</A
 ></DT
 ><DD
 ><DL
 ><DT
->7.1. <A
-HREF="x267.html#AEN278"
+>6.1. <A
+HREF="definingsets.html#AEN266"
 >Primary Keys</A
 ></DT
 ><DT
->7.2. <A
-HREF="x267.html#AEN303"
+>6.2. <A
+HREF="definingsets.html#AEN290"
 >Grouping tables into sets</A
 ></DT
 ></DL
 ></DD
+></DL
+></DD
 ><DT
->8. <A
-HREF="altperl.html"
+>I. <A
+HREF="slony-commands.html"
+>Slony-I Commands</A
+></DT
+><DD
+><DL
+><DT
+><A
+HREF="slon.html"
+><B
+CLASS="APPLICATION"
+>slon</B
+></A
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> daemon
+    </DT
+><DT
+><A
+HREF="slonik.html"
+><B
+CLASS="APPLICATION"
+>slonik</B
+></A
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> command processor
+    </DT
+></DL
+></DD
+><DT
+><A
+HREF="slonyadmin.html"
+></A
+></DT
+><DD
+><DL
+><DT
+>1. <A
+HREF="slonyadmin.html#ALTPERL"
 >Slony-I Administration Scripts</A
 ></DT
 ><DD
 ><DL
 ><DT
->8.1. <A
-HREF="altperl.html#AEN314"
+>1.1. <A
+HREF="slonyadmin.html#AEN454"
 >Node/Cluster Configuration - cluster.nodes</A
 ></DT
 ><DT
->8.2. <A
-HREF="altperl.html#AEN342"
+>1.2. <A
+HREF="slonyadmin.html#AEN479"
 >Set configuration - cluster.set1, cluster.set2</A
 ></DT
 ><DT
->8.3. <A
-HREF="altperl.html#AEN362"
+>1.3. <A
+HREF="slonyadmin.html#AEN499"
 >build_env.pl</A
 ></DT
 ><DT
->8.4. <A
-HREF="altperl.html#AEN375"
+>1.4. <A
+HREF="slonyadmin.html#AEN512"
 >create_set.pl</A
 ></DT
 ><DT
->8.5. <A
-HREF="altperl.html#AEN380"
+>1.5. <A
+HREF="slonyadmin.html#AEN517"
 >drop_node.pl</A
 ></DT
 ><DT
->8.6. <A
-HREF="altperl.html#AEN383"
+>1.6. <A
+HREF="slonyadmin.html#AEN520"
 >drop_set.pl</A
 ></DT
 ><DT
->8.7. <A
-HREF="altperl.html#AEN387"
+>1.7. <A
+HREF="slonyadmin.html#AEN524"
 >failover.pl</A
 ></DT
 ><DT
->8.8. <A
-HREF="altperl.html#AEN390"
+>1.8. <A
+HREF="slonyadmin.html#AEN527"
 >init_cluster.pl</A
 ></DT
 ><DT
->8.9. <A
-HREF="altperl.html#AEN393"
+>1.9. <A
+HREF="slonyadmin.html#AEN530"
 >merge_sets.pl</A
 ></DT
 ><DT
->8.10. <A
-HREF="altperl.html#AEN396"
+>1.10. <A
+HREF="slonyadmin.html#AEN533"
 >move_set.pl</A
 ></DT
 ><DT
->8.11. <A
-HREF="altperl.html#AEN399"
+>1.11. <A
+HREF="slonyadmin.html#AEN536"
 >replication_test.pl</A
 ></DT
 ><DT
->8.12. <A
-HREF="altperl.html#AEN402"
+>1.12. <A
+HREF="slonyadmin.html#AEN539"
 >restart_node.pl</A
 ></DT
 ><DT
->8.13. <A
-HREF="altperl.html#AEN405"
+>1.13. <A
+HREF="slonyadmin.html#AEN542"
 >restart_nodes.pl</A
 ></DT
 ><DT
->8.14. <A
-HREF="altperl.html#AEN408"
+>1.14. <A
+HREF="slonyadmin.html#AEN545"
 >show_configuration.pl</A
 ></DT
 ><DT
->8.15. <A
-HREF="altperl.html#AEN412"
+>1.15. <A
+HREF="slonyadmin.html#AEN549"
 >slon_kill.pl</A
 ></DT
 ><DT
->8.16. <A
-HREF="altperl.html#AEN415"
+>1.16. <A
+HREF="slonyadmin.html#AEN552"
 >slon_pushsql.pl</A
 ></DT
 ><DT
->8.17. <A
-HREF="altperl.html#AEN418"
+>1.17. <A
+HREF="slonyadmin.html#AEN555"
 >slon_start.pl</A
 ></DT
 ><DT
->8.18. <A
-HREF="altperl.html#AEN421"
+>1.18. <A
+HREF="slonyadmin.html#AEN558"
 >slon_watchdog.pl</A
 ></DT
 ><DT
->8.19. <A
-HREF="altperl.html#AEN424"
+>1.19. <A
+HREF="slonyadmin.html#AEN561"
 >slon_watchdog2.pl</A
 ></DT
 ><DT
->8.20. <A
-HREF="altperl.html#AEN428"
+>1.20. <A
+HREF="slonyadmin.html#AEN565"
 >subscribe_set.pl</A
 ></DT
 ><DT
->8.21. <A
-HREF="altperl.html#AEN431"
+>1.21. <A
+HREF="slonyadmin.html#AEN568"
 >uninstall_nodes.pl</A
 ></DT
 ><DT
->8.22. <A
-HREF="altperl.html#AEN434"
+>1.22. <A
+HREF="slonyadmin.html#AEN571"
 >unsubscribe_set.pl</A
 ></DT
 ><DT
->8.23. <A
-HREF="altperl.html#AEN437"
+>1.23. <A
+HREF="slonyadmin.html#AEN574"
 >update_nodes.pl</A
 ></DT
 ></DL
 ></DD
 ><DT
->9. <A
+>2. <A
 HREF="slonstart.html"
 >Slon daemons</A
 ></DT
 ><DT
->10. <A
-HREF="slonconfig.html"
->Slon Configuration Options</A
-></DT
-><DT
->11. <A
+>3. <A
 HREF="subscribenodes.html"
 >Subscribing Nodes</A
 ></DT
 ><DT
->12. <A
+>4. <A
 HREF="monitoring.html"
 >Monitoring</A
 ></DT
 ><DD
 ><DL
 ><DT
->12.1. <A
-HREF="monitoring.html#AEN565"
+>4.1. <A
+HREF="monitoring.html#AEN645"
 >CONFIG notices</A
 ></DT
 ><DT
->12.2. <A
-HREF="monitoring.html#AEN571"
+>4.2. <A
+HREF="monitoring.html#AEN650"
 >DEBUG Notices</A
 ></DT
 ></DL
 ></DD
 ><DT
->13. <A
+>5. <A
 HREF="maintenance.html"
 >Slony-I Maintenance</A
 ></DT
 ><DD
 ><DL
 ><DT
->13.1. <A
-HREF="maintenance.html#AEN587"
+>5.1. <A
+HREF="maintenance.html#AEN665"
 >Watchdogs: Keeping Slons Running</A
 ></DT
 ><DT
->13.2. <A
-HREF="maintenance.html#AEN591"
+>5.2. <A
+HREF="maintenance.html#AEN669"
 >Alternative to Watchdog: generate_syncs.sh</A
 ></DT
 ><DT
->13.3. <A
-HREF="maintenance.html#AEN600"
+>5.3. <A
+HREF="maintenance.html#AEN678"
 >Log Files</A
 ></DT
 ></DL
 ></DD
 ><DT
->14. <A
+>6. <A
 HREF="reshape.html"
 >Reshaping a Cluster</A
 ></DT
 ><DT
->15. <A
+>7. <A
 HREF="failover.html"
 >Doing switchover and failover with Slony-I</A
 ></DT
 ><DD
 ><DL
 ><DT
->15.1. <A
-HREF="failover.html#AEN622"
+>7.1. <A
+HREF="failover.html#AEN700"
 >Foreword</A
 ></DT
 ><DT
->15.2. <A
-HREF="failover.html#AEN627"
+>7.2. <A
+HREF="failover.html#AEN705"
 >Switchover</A
 ></DT
 ><DT
->15.3. <A
-HREF="failover.html#AEN642"
+>7.3. <A
+HREF="failover.html#AEN722"
 >Failover</A
 ></DT
 ><DT
->15.4. <A
-HREF="failover.html#AEN659"
+>7.4. <A
+HREF="failover.html#AEN737"
 >After failover, getting back node1</A
 ></DT
 ></DL
 ></DD
 ><DT
->16. <A
+>8. <A
 HREF="listenpaths.html"
 >Slony Listen Paths</A
 ></DT
 ><DD
 ><DL
 ><DT
->16.1. <A
-HREF="listenpaths.html#AEN666"
+>8.1. <A
+HREF="listenpaths.html#AEN744"
 >How Listening Can Break</A
 ></DT
 ><DT
->16.2. <A
-HREF="listenpaths.html#AEN672"
+>8.2. <A
+HREF="listenpaths.html#AEN753"
 >How The Listen Configuration Should Look</A
 ></DT
 ><DT
->16.3. <A
-HREF="listenpaths.html#AEN697"
+>8.3. <A
+HREF="listenpaths.html#AEN783"
 >Open Question</A
 ></DT
 ><DT
->16.4. <A
-HREF="listenpaths.html#AEN700"
+>8.4. <A
+HREF="listenpaths.html#AEN788"
 >Generating listener entries via heuristics</A
 ></DT
 ></DL
 ></DD
 ><DT
->17. <A
+>9. <A
 HREF="addthings.html"
 >Adding Things to Replication</A
 ></DT
 ><DT
->18. <A
+>10. <A
 HREF="dropthings.html"
 >Dropping things from Slony Replication</A
 ></DT
 ><DD
 ><DL
 ><DT
->18.1. <A
-HREF="dropthings.html#AEN729"
+>10.1. <A
+HREF="dropthings.html#AEN817"
 >Dropping A Whole Node</A
 ></DT
 ><DT
->18.2. <A
-HREF="dropthings.html#AEN737"
+>10.2. <A
+HREF="dropthings.html#AEN825"
 >Dropping An Entire Set</A
 ></DT
 ><DT
->18.3. <A
-HREF="dropthings.html#AEN750"
+>10.3. <A
+HREF="dropthings.html#AEN838"
 >Unsubscribing One Node From One Set</A
 ></DT
 ><DT
->18.4. <A
-HREF="dropthings.html#AEN762"
+>10.4. <A
+HREF="dropthings.html#AEN850"
 >Dropping A Table From A Set</A
 ></DT
 ><DT
->18.5. <A
-HREF="dropthings.html#AEN773"
+>10.5. <A
+HREF="dropthings.html#AEN860"
 >Dropping A Sequence From A Set</A
 ></DT
 ></DL
 ></DD
 ><DT
->19. <A
+>11. <A
 HREF="ddlchanges.html"
 >Database Schema Changes (DDL)</A
 ></DT
 ><DT
->20. <A
+>12. <A
 HREF="firstdb.html"
 >Replicating Your First Database</A
 ></DT
 ><DD
 ><DL
 ><DT
->20.1. <A
-HREF="firstdb.html#AEN867"
+>12.1. <A
+HREF="firstdb.html#AEN953"
 >Creating the pgbenchuser</A
 ></DT
 ><DT
->20.2. <A
-HREF="firstdb.html#AEN871"
+>12.2. <A
+HREF="firstdb.html#AEN957"
 >Preparing the databases</A
 ></DT
 ><DT
->20.3. <A
-HREF="firstdb.html#AEN885"
+>12.3. <A
+HREF="firstdb.html#AEN973"
 >Configuring the Database for Replication.</A
 ></DT
 ></DL
 ></DD
 ><DT
->21. <A
+>13. <A
 HREF="help.html"
 >More Slony-I Help</A
 ></DT
+><DD
+><DL
 ><DT
->22. <A
-HREF="x931.html"
+>13.1. <A
+HREF="help.html#AEN1017"
 >Other Information Sources</A
 ></DT
 ></DL
 ></DD
+></DL
+></DD
 ><DT
 ><A
 HREF="faq.html"
-></A
+>Slony-I FAQ</A
 ></DT
 ></DL
 ></DIV
@@ -656,7 +679,7 @@
 ALIGN="right"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
Index: slonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/slonik.sgml -Ldoc/adminguide/slonik.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/slonik.sgml
+++ doc/adminguide/slonik.sgml
@@ -1,14 +1,43 @@
-<sect1 id="slonik"> <title>Slonik</title>
-
-<sect2><title/Introduction/
-
-<para>Slonik is a command line utility designed specifically to setup
-and modify configurations of the Slony-I replication system.</para>
-
-<sect2><title/General outline/
-
-<para>The slonik command line utility is supposed to be used embedded
-into shell scripts and reads commands from files or stdin.</para>
+<refentry id="slonik">
+<refmeta>
+    <refentrytitle id="app-slonik-title"><application>slonik</application></refentrytitle>
+    <manvolnum>1</manvolnum>
+    <refmiscinfo>Application</refmiscinfo>
+  </refmeta>
+
+  <refnamediv>
+    <refname><application>slonik</application></refname>
+    <refpurpose>
+      <productname>Slony-I</productname> command processor
+    </refpurpose>
+  </refnamediv>
+
+ <indexterm zone="slonik">
+  <primary>slonik</primary>
+ </indexterm>
+
+ <refsynopsisdiv>
+  <cmdsynopsis>
+   <command>slonik</command>
+   <arg><replaceable class="parameter">filename</replaceable>
+  </cmdsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+  <title>Description</title>
+
+    <para>
+     <application>slonik</application> is the command processor
+     application that is used to set up and modify configurations of
+     <productname>Slony-I</productname> replication clusters.
+    </para>
+ </refsect1>
+
+ <refsect1><title> Outline</title>
+
+  <para>The slonik command line utility is supposed to be used
+  embedded into shell scripts and reads commands from files or
+  stdin.</para>
 
 <para>It reads a set of Slonik statements, which are written in a
 scripting language with syntax similar to that of SQL, and performs
@@ -16,27 +45,41 @@
 script.</para>
 
 <para>Nearly all of the real configuration work is actually done by
-calling stored procedures after loading the Slony-I support base into
-a database.  Slonik was created because these stored procedures have
-special requirements as to on which particular node in the replication
-system they are called.  The absence of named parameters for stored
-procedures makes it rather hard to do this from the psql prompt, and
-psql lacks the ability to maintain multiple connections with open
-transactions to multiple databases.</para>
-
-<para>The format of the Slonik "language" is very similar to that of
-SQL, and the parser is based on a similar set of formatting rules for
-such things as numbers and strings.  Note that slonik is declarative,
-using literal values throughout, and does <emphasis>not</emphasis> have the
-notion of variables.  It is anticipated that Slonik scripts will
-typically be <emphasis>generated</emphasis> by scripts, such as Bash or Perl,
-and these sorts of scripting languages already have perfectly good
-ways of managing variables.</para>
+  calling stored procedures after loading the Slony-I support base
+  into a database.  Slonik was created because these stored procedures
+  have special requirements as to which particular node in the
+  replication system they must be called on.  The absence of named parameters
+  for stored procedures makes it rather hard to do this from the psql
+  prompt, and psql lacks the ability to maintain multiple connections
+  with open transactions to multiple databases.</para>
+
+  <para>The format of the Slonik <quote/language/ is very similar to
+  that of SQL, and the parser is based on a similar set of formatting
+  rules for such things as numbers and strings.  Note that slonik is
+  declarative, using literal values throughout, and does
+  <emphasis>not</emphasis> have the notion of variables.  It is
+  anticipated that Slonik scripts will typically be
+  <emphasis>generated</emphasis> by scripts, such as Bash or Perl, and
+  these sorts of scripting languages already have perfectly good ways
+  of managing variables.</para>
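Since slonik itself has no variables, the generation step typically lives in a wrapper script. The sketch below shows the pattern in shell; the cluster name and conninfo strings are purely hypothetical, and the generated text is only the preamble of a slonik script:

```shell
# Sketch: build slonik input from shell variables (all names hypothetical).
CLUSTER=movies                              # hypothetical cluster name
MASTERCONN="dbname=movies host=master-db"   # hypothetical conninfo strings
SLAVECONN="dbname=movies host=slave-db"

SCRIPT="cluster name = $CLUSTER;
node 1 admin conninfo = '$MASTERCONN';
node 2 admin conninfo = '$SLAVECONN';"

printf '%s\n' "$SCRIPT"
```

A wrapper like this would normally pipe its output straight into the command processor, e.g. `sh genscript.sh | slonik`.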
 
 <para>A detailed list of Slonik commands can be found here: <ulink
 url="http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands">
 slonik commands </ulink></para>
 
+ </refsect1>
+
+ <refsect1>
+  <title>Exit Status</title>
+
+  <para>
+   <application>slonik</application> returns 0 to the shell if it
+   finished normally.  Scripts may specify return codes.   
+  </para>
+ </refsect1>
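A wrapper script can act on that exit status to stop a multi-step configuration run at the first failure. In this sketch, `true` stands in for an actual `slonik myscript.slonik` invocation; the guard pattern is the same for any command:

```shell
# Sketch: abort when a slonik step fails.  'true' is a stand-in here
# for a real invocation such as: run_slonik_step slonik init.slonik
run_slonik_step() {
    "$@"
    status=$?
    if [ $status -ne 0 ]; then
        echo "slonik step failed with status $status" >&2
    fi
    return $status
}

run_slonik_step true
```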
+</refentry>
+
+
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
@@ -46,10 +89,9 @@
 sgml-always-quote-attributes:t
 sgml-indent-step:1
 sgml-indent-data:t
-sgml-parent-document:slony.sgml
-sgml-default-dtd-file:"./reference.ced"
+sgml-parent-document:"slony.sgml"
 sgml-exposed-tags:nil
-sgml-local-catalogs:("/usr/lib/sgml/catalog")
+sgml-local-catalogs:"/usr/lib/sgml/catalog"
 sgml-local-ecat-files:nil
 End:
 -->
Index: ddlchanges.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/ddlchanges.html -Ldoc/adminguide/ddlchanges.html -u -w -r1.1 -r1.2
--- doc/adminguide/ddlchanges.html
+++ doc/adminguide/ddlchanges.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE=" Dropping things from Slony Replication"
 HREF="dropthings.html"><LINK
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="DDLCHANGES"
->19. Database Schema Changes (DDL)</A
+>11. Database Schema Changes (DDL)</A
 ></H1
 ><P
 >When changes are made to the database schema, <SPAN
@@ -256,7 +256,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: cluster.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/cluster.html -Ldoc/adminguide/cluster.html -u -w -r1.1 -r1.2
--- doc/adminguide/cluster.html
+++ doc/adminguide/cluster.html
@@ -12,13 +12,14 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyintro.html"><LINK
 REL="PREVIOUS"
 TITLE="Slony-I Concepts"
 HREF="concepts.html"><LINK
 REL="NEXT"
-TITLE="Defining Slony-I Replication Sets"
-HREF="x267.html"><LINK
+TITLE="Defining Slony-I Replication
+Sets"
+HREF="definingsets.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
@@ -64,7 +65,7 @@
 ALIGN="right"
 VALIGN="bottom"
 ><A
-HREF="x267.html"
+HREF="definingsets.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -79,7 +80,7 @@
 CLASS="SECT1"
 ><A
 NAME="CLUSTER"
->6. Defining Slony-I Clusters</A
+>5. Defining Slony-I Clusters</A
 ></H1
 ><P
 >A Slony-I cluster is the basic grouping of database instances in
@@ -93,15 +94,16 @@
 >For a simple install, it may be reasonable for the "master" to
 be node #1, and for the "slave" to be node #2.&#13;</P
 ><P
->Some planning should be done, in more complex cases, to ensure that the numbering system is kept sane, lest the administrators be driven insane.  The node numbers should be chosen to somehow correspond to the shape of the environment, as opposed to (say) the order in which nodes were initialized.&#13;</P
+>Some planning should be done, in more complex cases, to ensure
+that the numbering system is kept sane, lest the administrators be
+driven insane.  The node numbers should be chosen to somehow
+correspond to the shape of the environment, as opposed to (say) the
+order in which nodes were initialized.&#13;</P
 ><P
 >It may be that in version 1.1, nodes will also have a "name"
 attribute, so that they may be given more mnemonic names.  In that
 case, the node numbers can be cryptic; it will be the node name that
-is used to organize the cluster.
-
-
- </P
+is used to organize the cluster.</P
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
@@ -137,7 +139,7 @@
 ALIGN="right"
 VALIGN="top"
 ><A
-HREF="x267.html"
+HREF="definingsets.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -153,7 +155,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyintro.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
@@ -161,7 +163,8 @@
 WIDTH="33%"
 ALIGN="right"
 VALIGN="top"
->Defining Slony-I Replication Sets</TD
+>Defining Slony-I Replication
+Sets</TD
 ></TR
 ></TABLE
 ></DIV
Index: slonstart.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonstart.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slonstart.html -Ldoc/adminguide/slonstart.html -u -w -r1.1 -r1.2
--- doc/adminguide/slonstart.html
+++ doc/adminguide/slonstart.html
@@ -12,13 +12,12 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
-TITLE=" Slony-I Administration Scripts"
-HREF="altperl.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="NEXT"
-TITLE="Slon Configuration Options"
-HREF="slonconfig.html"><LINK
+TITLE=" Subscribing Nodes"
+HREF="subscribenodes.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
@@ -50,7 +49,7 @@
 ALIGN="left"
 VALIGN="bottom"
 ><A
-HREF="altperl.html"
+HREF="slonyadmin.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -64,7 +63,7 @@
 ALIGN="right"
 VALIGN="bottom"
 ><A
-HREF="slonconfig.html"
+HREF="subscribenodes.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -79,7 +78,7 @@
 CLASS="SECT1"
 ><A
 NAME="SLONSTART"
->9. Slon daemons</A
+>2. Slon daemons</A
 ></H1
 ><P
 >The programs that actually perform Slony-I replication are the
@@ -257,7 +256,7 @@
 ALIGN="left"
 VALIGN="top"
 ><A
-HREF="altperl.html"
+HREF="slonyadmin.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -275,7 +274,7 @@
 ALIGN="right"
 VALIGN="top"
 ><A
-HREF="slonconfig.html"
+HREF="subscribenodes.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -285,13 +284,13 @@
 WIDTH="33%"
 ALIGN="left"
 VALIGN="top"
->Slony-I Administration Scripts</TD
+></TD
 ><TD
 WIDTH="34%"
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
@@ -299,7 +298,7 @@
 WIDTH="33%"
 ALIGN="right"
 VALIGN="top"
->Slon Configuration Options</TD
+>Subscribing Nodes</TD
 ></TR
 ></TABLE
 ></DIV
Index: slonik.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/slonik.html -Ldoc/adminguide/slonik.html -u -w -r1.1 -r1.2
--- doc/adminguide/slonik.html
+++ doc/adminguide/slonik.html
@@ -2,7 +2,7 @@
 <HTML
 ><HEAD
 ><TITLE
->Slonik</TITLE
+>slonik</TITLE
 ><META
 NAME="GENERATOR"
 CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
@@ -12,19 +12,19 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+TITLE="Slony-I Commands"
+HREF="slony-commands.html"><LINK
 REL="PREVIOUS"
-TITLE=" Slony-I Installation"
-HREF="installation.html"><LINK
+TITLE="slon"
+HREF="slon.html"><LINK
 REL="NEXT"
-TITLE="Slony-I Concepts"
-HREF="concepts.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="STYLESHEET"
 TYPE="text/css"
 HREF="stdstyle.css"><META
 HTTP-EQUIV="Content-Type"></HEAD
 ><BODY
-CLASS="SECT1"
+CLASS="REFENTRY"
 BGCOLOR="#FFFFFF"
 TEXT="#000000"
 LINK="#0000FF"
@@ -50,7 +50,7 @@
 ALIGN="left"
 VALIGN="bottom"
 ><A
-HREF="installation.html"
+HREF="slon.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -64,7 +64,7 @@
 ALIGN="right"
 VALIGN="bottom"
 ><A
-HREF="concepts.html"
+HREF="slonyadmin.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -73,37 +73,78 @@
 ><HR
 ALIGN="LEFT"
 WIDTH="100%"></DIV
-><DIV
-CLASS="SECT1"
 ><H1
-CLASS="SECT1"
 ><A
 NAME="SLONIK"
->4. Slonik</A
+></A
+><B
+CLASS="APPLICATION"
+>slonik</B
 ></H1
 ><DIV
-CLASS="SECT2"
+CLASS="REFNAMEDIV"
+><A
+NAME="AEN416"
+></A
 ><H2
-CLASS="SECT2"
+>Name</H2
+><B
+CLASS="APPLICATION"
+>slonik</B
+>&nbsp;--&nbsp;      <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> command processor
+    </DIV
+><DIV
+CLASS="REFSYNOPSISDIV"
 ><A
-NAME="AEN206"
->4.1. Introduction</A
-></H2
+NAME="AEN423"
+></A
+><H2
+>Synopsis</H2
 ><P
->Slonik is a command line utility designed specifically to setup
-and modify configurations of the Slony-I replication system.</P
+><TT
+CLASS="COMMAND"
+>slonik</TT
+> [<TT
+CLASS="REPLACEABLE"
+><I
+>filename</I
+></TT
+>
+  ]</P
 ></DIV
 ><DIV
-CLASS="SECT2"
+CLASS="REFSECT1"
+><A
+NAME="AEN428"
+></A
 ><H2
-CLASS="SECT2"
+>Description</H2
+><P
+>     <B
+CLASS="APPLICATION"
+>slonik</B
+> is the command processor
+     application that is used to set up and modify configurations of
+     <SPAN
+CLASS="PRODUCTNAME"
+>Slony-I</SPAN
+> replication clusters.
+    </P
+></DIV
+><DIV
+CLASS="REFSECT1"
 ><A
-NAME="AEN209"
->4.2. General outline</A
-></H2
+NAME="AEN433"
+></A
+><H2
+> Outline</H2
 ><P
->The slonik command line utility is supposed to be used embedded
-into shell scripts and reads commands from files or stdin.</P
+>The slonik command line utility is intended to be embedded
+  in shell scripts; it reads commands from files or
+  stdin.</P
 ><P
 >It reads a set of Slonik statements, which are written in a
 scripting language with syntax similar to that of SQL, and performs
@@ -111,34 +152,38 @@
 script.</P
 ><P
 >Nearly all of the real configuration work is actually done by
-calling stored procedures after loading the Slony-I support base into
-a database.  Slonik was created because these stored procedures have
-special requirements as to on which particular node in the replication
-system they are called.  The absence of named parameters for stored
-procedures makes it rather hard to do this from the psql prompt, and
-psql lacks the ability to maintain multiple connections with open
-transactions to multiple databases.</P
-><P
->The format of the Slonik "language" is very similar to that of
-SQL, and the parser is based on a similar set of formatting rules for
-such things as numbers and strings.  Note that slonik is declarative,
-using literal values throughout, and does <SPAN
+  calling stored procedures after loading the Slony-I support base
+  into a database.  Slonik was created because these stored procedures
+  have special requirements as to which particular node in the
+  replication system they must be called on.  The absence of named parameters
+  for stored procedures makes it rather hard to do this from the psql
+  prompt, and psql lacks the ability to maintain multiple connections
+  with open transactions to multiple databases.</P
+><P
+>The format of the Slonik <SPAN
+CLASS="QUOTE"
+>"language"</SPAN
+> is very similar to
+  that of SQL, and the parser is based on a similar set of formatting
+  rules for such things as numbers and strings.  Note that slonik is
+  declarative, using literal values throughout, and does
+  <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >not</I
 ></SPAN
-> have the
-notion of variables.  It is anticipated that Slonik scripts will
-typically be <SPAN
+> have the notion of variables.  It is
+  anticipated that Slonik scripts will typically be
+  <SPAN
 CLASS="emphasis"
 ><I
 CLASS="EMPHASIS"
 >generated</I
 ></SPAN
-> by scripts, such as Bash or Perl,
-and these sorts of scripting languages already have perfectly good
-ways of managing variables.</P
+> by scripts, such as Bash or Perl, and
+  these sorts of scripting languages already have perfectly good ways
+  of managing variables.</P
 ><P
 >A detailed list of Slonik commands can be found here: <A
 HREF="http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands"
@@ -146,6 +191,20 @@
 >slonik commands </A
 ></P
 ></DIV
+><DIV
+CLASS="REFSECT1"
+><A
+NAME="AEN444"
+></A
+><H2
+>Exit Status</H2
+><P
+>   <B
+CLASS="APPLICATION"
+>slonik</B
+> returns 0 to the shell if it
+   finished normally.  Scripts may specify return codes.   
+  </P
 ></DIV
 ><DIV
 CLASS="NAVFOOTER"
@@ -163,7 +222,7 @@
 ALIGN="left"
 VALIGN="top"
 ><A
-HREF="installation.html"
+HREF="slon.html"
 ACCESSKEY="P"
 >Prev</A
 ></TD
@@ -181,7 +240,7 @@
 ALIGN="right"
 VALIGN="top"
 ><A
-HREF="concepts.html"
+HREF="slonyadmin.html"
 ACCESSKEY="N"
 >Next</A
 ></TD
@@ -191,13 +250,16 @@
 WIDTH="33%"
 ALIGN="left"
 VALIGN="top"
->Slony-I Installation</TD
+><B
+CLASS="APPLICATION"
+>slon</B
+></TD
 ><TD
 WIDTH="34%"
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slony-commands.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
@@ -205,7 +267,7 @@
 WIDTH="33%"
 ALIGN="right"
 VALIGN="top"
->Slony-I Concepts</TD
+></TD
 ></TR
 ></TABLE
 ></DIV
Index: listenpaths.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/listenpaths.html -Ldoc/adminguide/listenpaths.html -u -w -r1.1 -r1.2
--- doc/adminguide/listenpaths.html
+++ doc/adminguide/listenpaths.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE="Doing switchover and failover with Slony-I"
 HREF="failover.html"><LINK
@@ -79,46 +79,93 @@
 CLASS="SECT1"
 ><A
 NAME="LISTENPATHS"
->16. Slony Listen Paths</A
+>8. Slony Listen Paths</A
 ></H1
 ><P
->If you have more than two or three nodes, and any degree of usage of cascaded subscribers (_e.g._ - subscribers that are subscribing through a subscriber node), you will have to be fairly careful about the configuration of "listen paths" via the Slonik STORE LISTEN and DROP LISTEN statements that control the contents of the table sl_listen.&#13;</P
-><P
->The "listener" entries in this table control where each node expects to listen in order to get events propagated from other nodes.  You might think that nodes only need to listen to the "parent" from whom they are getting updates, but in reality, they need to be able to receive messages from _all_ nodes in order to be able to conclude that SYNCs have been received everywhere, and that, therefore, entries in sl_log_1 and sl_log_2 have been applied everywhere, and can therefore be purged.&#13;</P
+>If you have more than two or three nodes, and any degree of
+usage of cascaded subscribers (e.g., subscribers that are
+subscribing through a subscriber node), you will have to be fairly
+careful about the configuration of "listen paths" via the Slonik STORE
+LISTEN and DROP LISTEN statements that control the contents of the
+table sl_listen.&#13;</P
+><P
+>The "listener" entries in this table control where each node
+expects to listen in order to get events propagated from other nodes.
+You might think that nodes only need to listen to the "parent" from
+whom they are getting updates, but in reality, they need to be able to
+receive messages from <SPAN CLASS="emphasis"><I CLASS="EMPHASIS">all</I></SPAN> nodes in order to be able to conclude that
+SYNCs have been received everywhere, and that, therefore, entries in
+sl_log_1 and sl_log_2 have been applied everywhere, and can therefore
+be purged.&#13;</P
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN666"
->16.1. How Listening Can Break</A
+NAME="AEN744"
+>8.1. How Listening Can Break</A
 ></H2
 ><P
->On one occasion, I had a need to drop a subscriber node (#2) and recreate it.  That node was the data provider for another subscriber (#3) that was, in effect, a "cascaded slave."  Dropping the subscriber node initially didn't work, as slonik informed me that there was a dependant node.  I repointed the dependant node to the "master" node for the subscription set, which, for a while, replicated without difficulties.&#13;</P
-><P
->I then dropped the subscription on "node 2," and started resubscribing it.  That raised the Slony-I SET_SUBSCRIPTION event, which started copying tables.  At that point in time, events stopped propagating to "node 3," and while it was in perfectly OK shape, no events were making it to it.&#13;</P
+>On one occasion, I had a need to drop a subscriber node (#2) and
+recreate it.  That node was the data provider for another subscriber
+(#3) that was, in effect, a "cascaded slave."  Dropping the subscriber
+node initially didn't work, as slonik informed me that there was a
+dependant node.  I repointed the dependant node to the "master" node
+for the subscription set, which, for a while, replicated without
+difficulties.&#13;</P
 ><P
->The problem was that node #3 was expecting to receive events from node #2, which was busy processing the SET_SUBSCRIPTION event, and was not passing anything else on.&#13;</P
+>I then dropped the subscription on "node 2," and started
+resubscribing it.  That raised the Slony-I <TT
+CLASS="COMMAND"
+>SET_SUBSCRIPTION</TT
+>
+event, which started copying tables.  At that point in time, events
+stopped propagating to "node 3," and while it was in perfectly OK
+shape, no events were reaching it.&#13;</P
 ><P
->We dropped the listener rules that caused node #3 to listen to node 2, replacing them with rules where it expected its events to come from node  #1 (the "master" provider node for the replication set).  At that moment, "as if by magic," node #3 started replicating again, as it discovered a place to get SYNC events.&#13;</P
+>The problem was that node #3 was expecting to receive events
+from node #2, which was busy processing the <TT
+CLASS="COMMAND"
+>SET_SUBSCRIPTION</TT
+> event,
+and was not passing anything else on.&#13;</P
+><P
+>We dropped the listener rules that caused node #3 to listen to
+node 2, replacing them with rules where it expected its events to come
+from node #1 (the "master" provider node for the replication set).  At
+that moment, "as if by magic," node #3 started replicating again, as
+it discovered a place to get <TT
+CLASS="COMMAND"
+>SYNC</TT
+> events.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN672"
->16.2. How The Listen Configuration Should Look</A
+NAME="AEN753"
+>8.2. How The Listen Configuration Should Look</A
 ></H2
 ><P
->The simple cases tend to be simple to cope with.  We'll look at a fairly complex set of nodes.&#13;</P
-><P
->Consider a set of nodes, 1 thru 6, where 1 is the "master," where 2-4 subscribe directly to the master, and where 5 subscribes to 2, and 6 subscribes to 5.&#13;</P
+>The simple cases tend to be simple to cope with.  We'll look at
+a fairly complex set of nodes.&#13;</P
 ><P
->Here is a "listener network" that indicates where each node should listen for messages coming from each other node:&#13;</P
+>Consider a set of nodes, 1 thru 6, where 1 is the "master,"
+where 2-4 subscribe directly to the master, and where 5 subscribes to
+2, and 6 subscribes to 5.&#13;</P
 ><P
-><TT
-CLASS="COMMAND"
+>Here is a "listener network" that indicates where each node
+should listen for messages coming from each other node:
+
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="SCREEN"
 >		 1|	2|	3|	4|	5|	6|
 --------------------------------------------
 	1	0	 2	 3	 4	 2	 2 
@@ -126,83 +173,85 @@
 	3	1	 1	 0	 1	 1	 1 
 	4	1	 1	 1	 0	 1	 1 
 	5	2	 2	 2	 2	 0	 6 
-	6	5	 5	 5	 5	 5	 0 </TT
+       6   5    5    5    5    5    0 </PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
->Row 2 indicates all of the listen rules for node 2; it gets events for nodes 1, 3, and 4 throw node 1, and gets events for nodes 5 and 6 from node 5.&#13;</P
-><P
->The row of 5's at the bottom, for node 6, indicate that node 6 listens to node 5 to get events from nodes 1-5.&#13;</P
+>Row 2 indicates all of the listen rules for node 2; it gets
+events for nodes 1, 3, and 4 through node 1, and gets events for nodes 5
+and 6 from node 5.&#13;</P
 ><P
->The set of slonik SET LISTEN statements to express this "listener network" are as follows:&#13;</P
+>The row of 5's at the bottom, for node 6, indicates that node 6
+listens to node 5 to get events from nodes 1-5.&#13;</P
 ><P
-><TT
+>The set of slonik <TT
 CLASS="COMMAND"
->		store listen (origin = 1, receiver = 2, provider = 1);
+>STORE LISTEN</TT
+> statements to express
+this <SPAN
+CLASS="QUOTE"
+>"listener network"</SPAN
+> are as follows:
 
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    store listen (origin = 1, receiver = 2, provider = 1);
 		store listen (origin = 1, receiver = 3, provider = 1);
-
 		store listen (origin = 1, receiver = 4, provider = 1);
-
 		store listen (origin = 1, receiver = 5, provider = 2);
-
 		store listen (origin = 1, receiver = 6, provider = 5);
-
 		store listen (origin = 2, receiver = 1, provider = 2);
-
 		store listen (origin = 2, receiver = 3, provider = 1);
-
 		store listen (origin = 2, receiver = 4, provider = 1);
-
 		store listen (origin = 2, receiver = 5, provider = 2);
-
 		store listen (origin = 2, receiver = 6, provider = 5);
-
 		store listen (origin = 3, receiver = 1, provider = 3);
-
 		store listen (origin = 3, receiver = 2, provider = 1);
-
 		store listen (origin = 3, receiver = 4, provider = 1);
-
 		store listen (origin = 3, receiver = 5, provider = 2);
-
 		store listen (origin = 3, receiver = 6, provider = 5);
-
 		store listen (origin = 4, receiver = 1, provider = 4);
-
 		store listen (origin = 4, receiver = 2, provider = 1);
-
 		store listen (origin = 4, receiver = 3, provider = 1);
-
 		store listen (origin = 4, receiver = 5, provider = 2);
-
 		store listen (origin = 4, receiver = 6, provider = 5);
-
 		store listen (origin = 5, receiver = 1, provider = 2);
-
 		store listen (origin = 5, receiver = 2, provider = 5);
-
 		store listen (origin = 5, receiver = 3, provider = 1);
-
 		store listen (origin = 5, receiver = 4, provider = 1);
-
 		store listen (origin = 5, receiver = 6, provider = 5);
-
 		store listen (origin = 6, receiver = 1, provider = 2);
-
 		store listen (origin = 6, receiver = 2, provider = 5);
-
 		store listen (origin = 6, receiver = 3, provider = 1);
-
 		store listen (origin = 6, receiver = 4, provider = 1);
-
-		store listen (origin = 6, receiver = 5, provider = 6);&#13;</TT
+    store listen (origin = 6, receiver = 5, provider = 6);</PRE
+></TD
+></TR
+></TABLE
 >&#13;</P
 ><P
 >How we read these listen statements is thus...&#13;</P
 ><P
->When on the "receiver" node, look to the "provider" node to provide events coming from the "origin" node.&#13;</P
+>When on the "receiver" node, look to the "provider" node to
+provide events coming from the "origin" node.&#13;</P
 ><P
->The tool "init_cluster.pl" in the "altperl" scripts produces optimized listener networks in both the tabular form shown above as well as in the form of Slonik statements.&#13;</P
+>The tool <TT
+CLASS="FILENAME"
+>init_cluster.pl</TT
+> in the <TT
+CLASS="FILENAME"
+>altperl</TT
+>
+scripts produces optimized listener networks in both the tabular form
+shown above as well as in the form of Slonik statements.&#13;</P
 ><P
 >There are three "thorns" in this set of roses:
 <P
@@ -210,27 +259,54 @@
 ><UL
 ><LI
 ><P
-> If you change the shape of the node set, so that the nodes subscribe differently to things, you need to drop sl_listen entries and create new ones to indicate the new preferred paths between nodes.  There is no automated way at this point to do this "reshaping."
-
-&#13;</P
+> If you change the shape of the node set, so that the
+nodes subscribe differently to things, you need to drop sl_listen
+entries and create new ones to indicate the new preferred paths
+between nodes.  There is no automated way at this point to do this
+"reshaping."&#13;</P
 ></LI
 ><LI
 ><P
-> If you _don't_ change the sl_listen entries, events will likely continue to propagate so long as all of the nodes continue to run well.  The problem will only be noticed when a node is taken down, "orphaning" any nodes that are listening through it.
-
-&#13;</P
+> If you <SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>don't</I
+></SPAN
+> change the sl_listen entries,
+events will likely continue to propagate so long as all of the nodes
+continue to run well.  The problem will only be noticed when a node is
+taken down, "orphaning" any nodes that are listening through it.&#13;</P
 ></LI
 ><LI
 ><P
-> You might have multiple replication sets that have _different_ shapes for their respective trees of subscribers.  There won't be a single "best" listener configuration in that case.
-
-
-
-	</P
+> You might have multiple replication sets that have
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>different</I
+></SPAN
+> shapes for their respective trees of subscribers.  There
+won't be a single "best" listener configuration in that case.&#13;</P
 ></LI
 ><LI
 ><P
-> In order for there to be an sl_listen path, there _must_ be a series of sl_path entries connecting the origin to the receiver.  This means that if the contents of sl_path do not express a "connected" network of nodes, then some nodes will not be reachable.  This would typically happen, in practice, when you have two sets of nodes, one in one subnet, and another in another subnet, where there are only a couple of "firewall" nodes that can talk between the subnets.  Cut out those nodes and the subnets stop communicating.&#13;</P
+> In order for there to be an sl_listen path, there
+<SPAN
+CLASS="emphasis"
+><I
+CLASS="EMPHASIS"
+>must</I
+></SPAN
+> be a series of sl_path entries connecting the origin
+to the receiver.  This means that if the contents of sl_path do not
+express a "connected" network of nodes, then some nodes will not be
+reachable.  This would typically happen, in practice, when you have
+two sets of nodes, one in one subnet, and another in another subnet,
+where there are only a couple of "firewall" nodes that can talk
+between the subnets.  Cut out those nodes and the subnets stop
+communicating.&#13;</P
 ></LI
 ></UL
 >&#13;</P
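The last point above can be made concrete: the origin-to-receiver connectivity comes from `store path` statements, which populate sl_path. A minimal two-node sketch, generated from a shell wrapper as before (the conninfo values are illustrative only):

```shell
# Sketch: emit the slonik 'store path' statements that connect two
# nodes in both directions; conninfo values are illustrative.
NODE1_CONN="dbname=payroll host=db1"   # hypothetical conninfo strings
NODE2_CONN="dbname=payroll host=db2"

PATHS="store path (server = 1, client = 2, conninfo = '$NODE1_CONN');
store path (server = 2, client = 1, conninfo = '$NODE2_CONN');"

printf '%s\n' "$PATHS"
```

If either direction is missing from sl_path, the nodes on the far side of the gap become unreachable, which is exactly the "disconnected subnet" failure described above.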
@@ -240,42 +316,86 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN697"
->16.3. Open Question</A
+NAME="AEN783"
+>8.3. Open Question</A
 ></H2
 ><P
->I am not certain what happens if you have multiple listen path entries for one path, that is, if you set up entries allowing a node to listen to multiple receivers to get events from a particular origin.  Further commentary on that would be appreciated!&#13;</P
+>I am not certain what happens if you have multiple listen path
+entries for one path, that is, if you set up entries allowing a node
+to listen to multiple receivers to get events from a particular
+origin.  Further commentary on that would be appreciated!
+
+<DIV
+CLASS="NOTE"
+><P
+></P
+><TABLE
+CLASS="NOTE"
+WIDTH="100%"
+BORDER="0"
+><TR
+><TD
+WIDTH="25"
+ALIGN="CENTER"
+VALIGN="TOP"
+><IMG
+SRC="./images/note.gif"
+HSPACE="5"
+ALT="Note"></TD
+><TD
+ALIGN="LEFT"
+VALIGN="TOP"
+><P
+> Actually, I do have answers to this; the remainder of
+this document should be re-presented based on the fact that Slony-I
+1.1 will include a "heuristic" to generate the listener paths
+automatically. </P
+></TD
+></TR
+></TABLE
+></DIV
+>&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN700"
->16.4. Generating listener entries via heuristics</A
+NAME="AEN788"
+>8.4. Generating listener entries via heuristics</A
 ></H2
 ><P
->It ought to be possible to generate sl_listen entries dynamically, based on the following heuristics.  Hopefully this will take place in version 1.1, eliminating the need to configure this by hand.&#13;</P
+>It ought to be possible to generate sl_listen entries
+dynamically, based on the following heuristics.  Hopefully this will
+take place in version 1.1, eliminating the need to configure this by
+hand.&#13;</P
 ><P
->Configuration will (tentatively) be controlled based on two data sources:
+>Configuration will (tentatively) be controlled based on two data
+sources:
+
 <P
 ></P
 ><UL
 ><LI
 ><P
-> sl_subscribe entries are the first, most vital control as to what listens to what; we know there must be a "listen" entry for a subscriber node to listen to its provider for events from the provider, and there should be direct "listening" taking place between subscriber and provider.
-
-&#13;</P
+> sl_subscribe entries are the first, most vital
+control as to what listens to what; we know there must be a "listen"
+entry for a subscriber node to listen to its provider for events from
+the provider, and there should be direct "listening" taking place
+between subscriber and provider.&#13;</P
 ></LI
 ><LI
 ><P
-> sl_path entries are the second indicator; if sl_subscribe has not already indicated "how to listen," then a node may listen directly to the event's origin if there is a suitable sl_path entry
-
-&#13;</P
+> sl_path entries are the second indicator; if
+sl_subscribe has not already indicated "how to listen," then a node
+may listen directly to the event's origin if there is a suitable
+sl_path entry&#13;</P
 ></LI
 ><LI
 ><P
-> If there is no guidance thus far based on the above data sources, then nodes can listen indirectly if there is an sl_path entry that points to a suitable sl_listen entry...&#13;</P
+> If there is no guidance thus far based on the above
+data sources, then nodes can listen indirectly if there is an sl_path
+entry that points to a suitable sl_listen entry...&#13;</P
 ></LI
 ></UL
 >&#13;</P
@@ -337,7 +457,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
Index: firstdb.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/firstdb.html -Ldoc/adminguide/firstdb.html -u -w -r1.1 -r1.2
--- doc/adminguide/firstdb.html
+++ doc/adminguide/firstdb.html
@@ -12,7 +12,7 @@
 TITLE="Slony-I 1.1 Administration"
 HREF="slony.html"><LINK
 REL="UP"
-HREF="t24.html"><LINK
+HREF="slonyadmin.html"><LINK
 REL="PREVIOUS"
 TITLE="Database Schema Changes (DDL)"
 HREF="ddlchanges.html"><LINK
@@ -79,7 +79,7 @@
 CLASS="SECT1"
 ><A
 NAME="FIRSTDB"
->20. Replicating Your First Database</A
+>12. Replicating Your First Database</A
 ></H1
 ><P
 >In this example, we will be replicating a brand new pgbench database.  The
@@ -259,8 +259,8 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN867"
->20.1. Creating the pgbenchuser</A
+NAME="AEN953"
+>12.1. Creating the pgbenchuser</A
 ></H2
 ><P
 ><TT
@@ -273,422 +273,351 @@
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN871"
->20.2. Preparing the databases</A
+NAME="AEN957"
+>12.2. Preparing the databases</A
 ></H2
-><P
-><TT
-CLASS="COMMAND"
+><TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
 >createdb -O $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
 createdb -O $PGBENCHUSER -h $SLAVEHOST $SLAVEDBNAME
-pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME</TT
->&#13;</P
+    pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME</PRE
+></TD
+></TR
+></TABLE
 ><P
 >Because Slony-I depends on the databases having the pl/pgSQL procedural
 language installed, we better install it now.  It is possible that you have
 installed pl/pgSQL into the template1 database in which case you can skip this
 step because it's already installed into the $MASTERDBNAME.
 
-<TT
-CLASS="COMMAND"
->&#13;createlang plpgsql -h $MASTERHOST $MASTERDBNAME&#13;</TT
->
-&#13;</P
-><P
->Slony-I does not yet automatically copy table definitions from a master when a
-
-slave subscribes to it, so we need to import this data.  We do this with
-
-pg_dump.
-
-&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->&#13;pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME&#13;</TT
->
-
-&#13;</P
-><P
->To illustrate how Slony-I allows for on the fly replication subscription, lets
-
-start up pgbench.  If you run the pgbench application in the foreground of a
-
-separate terminal window, you can stop and restart it with different
-
-parameters at any time.  You'll need to re-export the variables again so they
-
-are available in this session as well.
-
-&#13;</P
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    createlang plpgsql -h $MASTERHOST $MASTERDBNAME</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
->The typical command to run pgbench would look like:
+>Slony-I does not yet automatically copy table definitions from a
+master when a slave subscribes to it, so we need to import this data.
+We do this with <B
+CLASS="APPLICATION"
+>pg_dump</B
+>.
 
-&#13;</P
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
-><TT
-CLASS="COMMAND"
->&#13;pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME&#13;</TT
->
+>To illustrate how Slony-I allows for on the fly replication
+subscription, let's start up <B
+CLASS="APPLICATION"
+>pgbench</B
+>.  If you run the
+<B
+CLASS="APPLICATION"
+>pgbench</B
+> application in the foreground of a separate
+terminal window, you can stop and restart it with different parameters
+at any time.  You'll need to re-export the variables again so they are
+available in this session as well.&#13;</P
+><P
+>The typical command to run <B
+CLASS="APPLICATION"
+>pgbench</B
+> would look like:
 
-&#13;</P
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
->This will run pgbench with 5 concurrent clients each processing 1000
-
-transactions against the pgbench database running on localhost as the pgbench
-
-user.
-
-&#13;</P
+>This will run <B
+CLASS="APPLICATION"
+>pgbench</B
+> with 5 concurrent clients
+each processing 1000 transactions against the pgbench database running
+on localhost as the pgbench user.&#13;</P
 ></DIV
 ><DIV
 CLASS="SECT2"
 ><H2
 CLASS="SECT2"
 ><A
-NAME="AEN885"
->20.3. Configuring the Database for Replication.</A
+NAME="AEN973"
+>12.3. Configuring the Database for Replication.</A
 ></H2
 ><P
->Creating the configuration tables, stored procedures, triggers and
-
-configuration is all done through the slonik tool.  It is a specialized
-
-scripting aid that mostly calls stored procedures in the master/salve (node)
-
-databases.  The script to create the initial configuration for the simple
-
-master-slave setup of our pgbench database looks like this:
-
-&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->&#13;#!/bin/sh
-
+>Creating the configuration tables, stored procedures, triggers
+and configuration is all done through the slonik tool.  It is a
+specialized scripting aid that mostly calls stored procedures in the
+master/slave (node) databases.  The script to create the initial
+configuration for the simple master-slave setup of our pgbench
+database looks like this:
 
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    #!/bin/sh
 
 slonik &#60;&#60;_EOF_
-
 	#--
-
 	 # define the namespace the replication system uses in our example it is
-
 	 # slony_example
-
 	#--
-
 	cluster name = $CLUSTERNAME;
 
-
-
 	#--
-
 	 # admin conninfo's are used by slonik to connect to the nodes one for each
-
 	 # node on each side of the cluster, the syntax is that of PQconnectdb in
-
 	 # the C-API
-
 	# --
-
 	node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
-
 	node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
 
-
-
 	#--
-
 	 # init the first node.  Its id MUST be 1.  This creates the schema
-
 	 # _$CLUSTERNAME containing all replication system specific database
-
 	 # objects.
 
-
-
 	#--
-
 	init cluster ( id=1, comment = 'Master Node');
 
- 
-
 	#--
-
 	 # Because the history table does not have a primary key or other unique
-
 	 # constraint that could be used to identify a row, we need to add one.
-
 	 # The following command adds a bigint column named
-
 	 # _Slony-I_$CLUSTERNAME_rowID to the table.  It will have a default value
-
 	 # of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and NOT NULL
-
 	 # constraints applied.  All existing rows will be initialized with a
-
 	 # number
-
 	#--
-
 	table add key (node id = 1, fully qualified name = 'public.history');
 
-
-
 	#--
-
 	 # Slony-I organizes tables into sets.  The smallest unit a node can
-
 	 # subscribe is a set.  The following commands create one set containing
-
 	 # all 4 pgbench tables.  The master or origin of the set is node 1.
-
 	#--
-
 	create set (id=1, origin=1, comment='All pgbench tables');
-
 	set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
-
 	set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
-
 	set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
-
 	set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
 
-
-
 	#--
-
 	 # Create the second node (the slave) tell the 2 nodes how to connect to
-
 	 # each other and how they should listen for events.
-
 	#--
 
-
-
 	store node (id=2, comment = 'Slave node');
-
 	store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
-
 	store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
-
 	store listen (origin=1, provider = 1, receiver =2);
-
 	store listen (origin=2, provider = 2, receiver =1);
-
-_EOF_&#13;</TT
->
-
-
-
-&#13;</P
+    _EOF_</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
->Is the pgbench still running?  If not start it again.
-
-&#13;</P
+>Is pgbench still running?  If not, start it again.&#13;</P
 ><P
->At this point we have 2 databases that are fully prepared.  One is the master
-
-database in which bgbench is busy accessing and changing rows.  It's now time
-
-to start the replication daemons.
-
-&#13;</P
+>At this point we have 2 databases that are fully prepared.  One
+is the master database in which pgbench is busy accessing and changing
+rows.  It's now time to start the replication daemons.&#13;</P
 ><P
 >On $MASTERHOST the command to start the replication engine is
 
-&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->&#13;slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"&#13;</TT
->
-&#13;</P
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
 >Likewise we start the replication system on node 2 (the slave)
-&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->&#13;slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"&#13;</TT
->
-&#13;</P
-><P
->Even though we have the slon running on both the master and slave and they are
 
-both spitting out diagnostics and other messages, we aren't replicating any
-
-data yet.  The notices you are seeing is the synchronization of cluster
-
-configurations between the 2 slon processes.
-&#13;</P
-><P
->To start replicating the 4 pgbench tables (set 1) from the master (node id 1)
-
-the the slave (node id 2), execute the following script.
-
-&#13;</P
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
-><TT
-CLASS="COMMAND"
->&#13;#!/bin/sh
+>Even though we have the <B
+CLASS="APPLICATION"
+>slon</B
+> running on both the
+master and slave, and they are both spitting out diagnostics and other
+messages, we aren't replicating any data yet.  The notices you are
+seeing reflect the synchronization of cluster configurations between the 2
+<B
+CLASS="APPLICATION"
+>slon</B
+> processes.&#13;</P
+><P
+>To start replicating the 4 pgbench tables (set 1) from the
+master (node id 1) to the slave (node id 2), execute the following
+script.
 
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    #!/bin/sh
 slonik &#60;&#60;_EOF_
-
 	 # ----
-
 	 # This defines which namespace the replication system uses
-
 	 # ----
-
 	 cluster name = $CLUSTERNAME;
 
-
-
 	 # ----
-
 	 # Admin conninfo's are used by the slonik program to connect
-
 	 # to the node databases.  So these are the PQconnectdb arguments
-
 	 # that connect from the administrators workstation (where
-
 	 # slonik is executed).
-
 	 # ----
-
 	 node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
-
 	 node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
 
-
-
 	 # ----
-
 	 # Node 2 subscribes set 1
-
 	 # ----
-
 	 subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
-
-_EOF_&#13;</TT
->
-&#13;</P
-><P
->Any second here, the replication daemon on $SLAVEHOST will start to copy the
-
-current content of all 4 replicated tables.  While doing so, of course, the
-
-pgbench application will continue to modify the database.  When the copy
-
-process is finished, the replication daemon on $SLAVEHOST will start to catch
-
-up by applying the accumulated replication log.  It will do this in little
-
-steps, 10 seconds worth of application work at a time.  Depending on the
-
-performance of the two systems involved, the sizing of the two databases, the
-
-actual transaction load and how well the two databases are tuned and
-
-maintained, this catchup process can be a matter of minutes, hours, or
-
-eons.
-
-&#13;</P
+    _EOF_</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
->You have now successfully set up your first basic master/slave replication
-
-system, and the 2 databases once the slave has caught up contain identical
-
-data.  That's the theory.  In practice, it's good to check that the datasets
-
-are in fact the same.
-
-&#13;</P
+>Any second now, the replication daemon on $SLAVEHOST will start
+to copy the current content of all 4 replicated tables.  While doing
+so, of course, the pgbench application will continue to modify the
+database.  When the copy process is finished, the replication daemon
+on <CODE
+CLASS="ENVAR"
+>$SLAVEHOST</CODE
+> will start to catch up by applying the
+accumulated replication log.  It will do this in little steps, 10
+seconds worth of application work at a time.  Depending on the
+performance of the two systems involved, the sizing of the two
+databases, the actual transaction load and how well the two databases
+are tuned and maintained, this catchup process can be a matter of
+minutes, hours, or eons.&#13;</P
+><P
+>You have now successfully set up your first basic master/slave
+replication system, and the 2 databases should, once the slave has
+caught up, contain identical data.  That's the theory, at least.  In
+practice, it's good to build confidence by verifying that the datasets
+are in fact the same.&#13;</P
 ><P
 >The following script will create ordered dumps of the 2 databases and compare
-
 them.  Make sure that pgbench has completed its testing, and that your slon
-
 sessions have caught up.
 
-&#13;</P
-><P
-><TT
-CLASS="COMMAND"
->&#13;#!/bin/sh
-
+<TABLE
+BORDER="0"
+BGCOLOR="#E0E0E0"
+WIDTH="100%"
+><TR
+><TD
+><PRE
+CLASS="PROGRAMLISTING"
+>    #!/bin/sh
 echo -n "**** comparing sample1 ... "
-
 psql -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME &#62;dump.tmp.1.$$ &#60;&#60;_EOF_
-
 	 select 'accounts:'::text, aid, bid, abalance, filler
-
 		  from accounts order by aid;
-
 	 select 'branches:'::text, bid, bbalance, filler
-
 		  from branches order by bid;
-
 	 select 'tellers:'::text, tid, bid, tbalance, filler
-
 		  from tellers order by tid;
-
 	 select 'history:'::text, tid, bid, aid, delta, mtime, filler,
-
 		  "_Slony-I_${CLUSTERNAME}_rowID"
-
 		  from history order by "_Slony-I_${CLUSTERNAME}_rowID";
-
 _EOF_
-
 psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME &#62;dump.tmp.2.$$ &#60;&#60;_EOF_
-
 	 select 'accounts:'::text, aid, bid, abalance, filler
-
 		  from accounts order by aid;
-
 	 select 'branches:'::text, bid, bbalance, filler
-
 		  from branches order by bid;
-
 	 select 'tellers:'::text, tid, bid, tbalance, filler
-
 		  from tellers order by tid;
-
 	 select 'history:'::text, tid, bid, aid, delta, mtime, filler,
-
 		  "_Slony-I_${CLUSTERNAME}_rowID"
-
 		  from history order by "_Slony-I_${CLUSTERNAME}_rowID";
-
 _EOF_
 
-
-
 if diff dump.tmp.1.$$ dump.tmp.2.$$ &#62;$CLUSTERNAME.diff ; then
-
 	 echo "success - databases are equal."
-
 	 rm dump.tmp.?.$$
-
 	 rm $CLUSTERNAME.diff
-
 else
-
 	 echo "FAILED - see $CLUSTERNAME.diff for database differences"
-
-fi&#13;</TT
->
-
-&#13;</P
+    fi</PRE
+></TD
+></TR
+></TABLE
+>&#13;</P
 ><P
->Note that there is somewhat more sophisticated documentation of the process in the Slony-I source code tree in a file called slony-I-basic-mstr-slv.txt.
-
-&#13;</P
+>Note that there is somewhat more sophisticated documentation of
+the process in the Slony-I source code tree in a file called
+slony-I-basic-mstr-slv.txt.&#13;</P
 ><P
 >If this script returns "FAILED" please contact the developers at
 <A
@@ -753,7 +682,7 @@
 ALIGN="center"
 VALIGN="top"
 ><A
-HREF="t24.html"
+HREF="slonyadmin.html"
 ACCESSKEY="U"
 >Up</A
 ></TD
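The firstdb walkthrough above tells you to make sure the slon sessions have caught up before comparing dumps; the catch-up state can be read from the cluster's sl_status view. A hedged sketch of a polling loop — the cluster name is a placeholder, the real psql query is left commented, and the lag value is stubbed so the sketch runs standalone:

```shell
#!/bin/sh
# Sketch: wait until the slave has caught up before comparing dumps.
# Assumes the sl_status view in schema _<clustername>; the cluster name
# "slony_example" is a placeholder.
CLUSTERNAME=slony_example
LAG_SQL="SELECT st_lag_num_events FROM \"_${CLUSTERNAME}\".sl_status;"

wait_for_catchup() {
    # In a real run you would poll the origin node, e.g.:
    #   lag=$(psql -At -U $REPLICATIONUSER -h $MASTERHOST \
    #         $MASTERDBNAME -c "$LAG_SQL")
    lag=$1                  # stubbed here so the sketch runs standalone
    while [ "$lag" -gt 0 ]; do
        echo "still $lag events behind"
        # sleep 10          # poll interval in a real run
        lag=0               # stub: pretend the next poll shows caught up
    done
    echo "caught up"
}

wait_for_catchup 3
```

Once the loop reports caught up, the ordered-dump comparison script in the section above should find the two databases identical.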
Index: slony.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/slony.sgml
+++ doc/adminguide/slony.sgml
@@ -19,18 +19,22 @@
   <author> <firstname>Christopher</firstname> <surname>Browne</surname> </author>
   &legal;
  </bookinfo>
-<article> <title>Slony-I Administration</title>
+<article id="slonyintro"> <title>Slony-I Introduction</title>
 
  &intro;
  &prerequisites;
  &installation;
- &slonik;
  &concepts;
  &cluster;
  &defineset;
+</article>
+
+&reference;
+
+<article id="slonyadmin"> <title> Slony-I Administration </title>
+
  &adminscripts;
  &startslons;
- &slonconfig;
  &subscribenodes;
  &monitoring;
  &maintenance;
@@ -45,11 +49,19 @@
 
 </article>
 
-<article id="faq"><title> FAQ </title>
+<article id="faq"><title> Slony-I FAQ </title>
 
-<para> Not all of these are, strictly speaking, <quote/frequently
-asked;/ some represent <emphasis/trouble found that seemed worth
-documenting/.
+    <articleinfo><title>Slony-I FAQ</title>
+      <corpauthor>The Slony Global Development Group</corpauthor>
+      <author> 
+	<firstname>Christopher </firstname> 
+	<surname>Browne</surname> 
+      </author> 
+    </articleinfo>
+
+<para> Not all of these are, strictly speaking, <quote>frequently
+asked;</quote> some represent <emphasis>trouble found that seemed
+worth documenting</emphasis>.</para>
 
  &faq;
 

