CVS User Account cvsuser
Fri Dec 9 22:22:42 PST 2005
Log Message:
-----------
Draw in latest docs from 1.2 branch

Tags:
----
REL_1_1_STABLE

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        adminscripts.sgml (r1.24 -> r1.24.2.1)
        bestpractices.sgml (r1.9.2.1 -> r1.9.2.2)
        concepts.sgml (r1.15 -> r1.15.2.1)
        ddlchanges.sgml (r1.15 -> r1.15.2.1)
        defineset.sgml (r1.18 -> r1.18.2.1)
        failover.sgml (r1.15 -> r1.15.2.1)
        faq.sgml (r1.40.2.2 -> r1.40.2.3)
        filelist.sgml (r1.13.2.1 -> r1.13.2.2)
        firstdb.sgml (r1.13 -> r1.13.2.1)
        installation.sgml (r1.13.2.1 -> r1.13.2.2)
        intro.sgml (r1.17 -> r1.17.2.1)
        legal.sgml (r1.6 -> r1.6.2.1)
        locking.sgml (r1.2 -> r1.2.2.1)
        logshipping.sgml (r1.9.2.1 -> r1.9.2.2)
        plainpaths.sgml (r1.8 -> r1.8.2.1)
        prerequisites.sgml (r1.18 -> r1.18.2.1)
        reshape.sgml (r1.14 -> r1.14.2.1)
        slon.sgml (r1.16 -> r1.16.2.1)
        slonconf.sgml (r1.8 -> r1.8.2.1)
        slonik.sgml (r1.13 -> r1.13.2.1)
        slonik_ref.sgml (r1.27.2.1 -> r1.27.2.2)
        slony.sgml (r1.20.2.1 -> r1.20.2.2)
        startslons.sgml (r1.11 -> r1.11.2.1)
        subscribenodes.sgml (r1.12 -> r1.12.2.1)
        supportedplatforms.sgml (r1.2.2.1 -> r1.2.2.2)
        testbed.sgml (r1.3.2.1 -> r1.3.2.2)
        usingslonik.sgml (r1.11 -> r1.11.2.1)
        versionupgrade.sgml (r1.5 -> r1.5.2.1)

-------------- next part --------------
Index: versionupgrade.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/versionupgrade.sgml,v
retrieving revision 1.5
retrieving revision 1.5.2.1
diff -Ldoc/adminguide/versionupgrade.sgml -Ldoc/adminguide/versionupgrade.sgml -u -w -r1.5 -r1.5.2.1
--- doc/adminguide/versionupgrade.sgml
+++ doc/adminguide/versionupgrade.sgml
@@ -1,7 +1,8 @@
 <!-- $Id$ -->
 <sect1 id="versionupgrade"><title>Using &slony1; for &postgres; Upgrades</title>
 
-<indexterm><primary>&slony1; for &postgres; version upgrades</primary></indexterm>
+<indexterm><primary>version upgrades for &postgres; using
+&slony1;</primary></indexterm>
 
 <para> A number of people have found
 &slony1; useful for helping perform upgrades
Index: slon.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slon.sgml,v
retrieving revision 1.16
retrieving revision 1.16.2.1
diff -Ldoc/adminguide/slon.sgml -Ldoc/adminguide/slon.sgml -u -w -r1.16 -r1.16.2.1
--- doc/adminguide/slon.sgml
+++ doc/adminguide/slon.sgml
@@ -279,8 +279,8 @@
       File from which to read <application>slon</application> configuration.
      </para>
 
-     <para> This configuration is discussed further in <xref
-     linkend="runtime-config">.  If there are to be a complex set of
+     <para> This configuration is  discussed  further  in <link 
+     linkend="runtime-config">Slon  Run-time Configuration</link>. If there are to be a complex set of
      configuration parameters, or if there are parameters you do not
      wish to be visible in the process environment variables (such as
      passwords), it may be convenient to draw many or all parameters
@@ -298,17 +298,62 @@
      <para>
       <envar>archive_dir</envar> indicates a directory in which to
       place a sequence of <command>SYNC</command> archive files for
-      use in <link linkend="logshipping"> log shipping</link> mode.
+      use in &logship; mode.
      </para>
     </listitem>
    </varlistentry>
+
+
+   <varlistentry>
+    <term><option>-q</option><replaceable class="parameter"> quit based on SYNC provider </replaceable></term>
+    <listitem>
+     <para>
+      <envar>quit_sync_provider</envar> indicates which provider's
+      worker thread should be watched in order to terminate after a
+      certain event.  This must be used in conjunction with the
+      <option>-r</option> option below...
+     </para>
+
+     <para> This allows you to have a <application>slon</application>
+     stop replicating after a certain point. </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
+    <term><option>-r</option><replaceable class="parameter"> quit at event number </replaceable></term>
+    <listitem>
+     <para>
+      <envar>quit_sync_finalsync</envar> indicates the event number
+      after which the remote worker thread for the provider above
+      should terminate.  This must be used in conjunction with the
+      <option>-q</option> option above...
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
+    <term><option>-l</option><replaceable class="parameter"> lag interval </replaceable></term>
+    <listitem>
+     <para>
+      <envar>lag_interval</envar> indicates an interval value such as
+      <command> 3 minutes </command> or <command> 4 hours </command>
+      or <command> 2 days </command> that indicates that this node is
+      to lag its providers by the specified interval of time.  This
+      causes events to be ignored until they reach the age
+      corresponding to the interval.
+     </para>
+    </listitem>
+   </varlistentry>
+
   </variablelist>
  </refsect1>
  <refsect1>
   <title>Exit Status</title>
   <para>
    <application>slon</application> returns 0 to the shell if it
-   finished normally.  It returns -1 if it encounters any fatal error.
+   finished normally.  It returns via <function>exit(-1)</function>
+   (which the shell will typically report as 255, since only the low
+   8 bits of the exit status survive) if it encounters any fatal error.
   </para>
  </refsect1>
 </refentry>
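The exit-status note in the slon hunk above can be sanity-checked: POSIX keeps only the low 8 bits of the value passed to exit(), so exit(-1) is normally observed by the shell as 255. A minimal sketch, using Python purely for illustration (slon itself is written in C):

```python
import subprocess
import sys

# exit(-1) is truncated to its low 8 bits before the parent process
# sees it, so a fatal-error exit of -1 surfaces as status 255 on POSIX.
low_byte = -1 & 0xFF
print(low_byte)  # 255

# Demonstrate with a child process that exits with -1.
proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(-1)"])
print(proc.returncode)  # 255 on POSIX
```

The same truncation applies to any negative value passed to exit(), which is why shells never report a negative exit status.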
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.18
retrieving revision 1.18.2.1
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.18 -r1.18.2.1
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -5,7 +5,7 @@
 <para>Defining the nodes indicated the shape of the cluster of
 database servers; it is now time to determine what data is to be
 copied between them.  The groups of data that are copied are defined
-as <quote>sets.</quote></para>
+as <quote>replication sets.</quote></para>
 
 <para>A replication set consists of the following:</para>
 <itemizedlist>
@@ -30,12 +30,12 @@
 
 <listitem><para> If the table has a formally identified primary key,
 <xref linkend="stmtsetaddtable"> can be used without any need to
-reference the primary key.  &slony1; will pick up that there is a
-primary key, and use it.</para></listitem>
+reference the primary key.  &slony1; can automatically pick up that
+there is a primary key, and use it.</para></listitem>
 
 <listitem><para> If the table hasn't got a primary key, but has some
 <emphasis>candidate</emphasis> primary key, that is, some index on a
-combination of fields that is UNIQUE and NOT NULL, then you can
+combination of fields that is both UNIQUE and NOT NULL, then you can
 specify the key, as in</para>
 
 <programlisting>
@@ -63,10 +63,10 @@
 key;</quote> it is, however, recommended that you have one of those
 instead of having &slony1; populate the PK column for you.  If you
 don't have a suitable primary key, that means that the table hasn't
-got any mechanism from your application's standpoint of keeping values
-unique.  &slony1; may therefore introduce a new failure mode for your
-application, and this implies that you had a way to enter confusing
-data into the database.</para>
+got any mechanism from your application's standpoint for keeping
+values unique.  &slony1; may therefore introduce a new failure mode
+for your application, and this also implies that you had a way to
+enter confusing data into the database.</para>
 </sect2>
 
 <sect2 id="definesets"><title>Grouping tables into sets</title>
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.24
retrieving revision 1.24.2.1
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.24 -r1.24.2.1
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -16,8 +16,9 @@
 <quote>foot gun</quote>, as minor typos on the command line led, on a
 couple of occasions, to pretty calamitous actions, so the behavior has
 been changed so that the scripts simply submit output to standard
-output.  An administrator should review the script
-<emphasis>before</emphasis> submitting it to <xref linkend="slonik">.</para>
+output.  The savvy administrator should review the script
+<emphasis>before</emphasis> submitting it to <xref
+linkend="slonik">.</para>
 
 <sect2><title>Node/Cluster Configuration - cluster.nodes</title>
 
@@ -31,6 +32,17 @@
 <listitem><para><envar>$CLUSTER_NAME</envar>=orglogs;	# What is the name of the replication cluster?</para></listitem>
 <listitem><para><envar>$LOGDIR</envar>='/opt/OXRS/log/LOGDBS';	# What is the base directory for logs?</para></listitem>
 <listitem><para><envar>$APACHE_ROTATOR</envar>="/opt/twcsds004/OXRS/apache/rotatelogs";  # If set, where to find Apache log rotator</para></listitem>
+<listitem><para><envar>foldCase</envar> # If set to 1, object names (including schema names) will be
+folded to lower case.  By default, your object names will be left
+alone.  Note that &postgres; itself folds object names to lower case;
+if you create a table via the command <command> CREATE TABLE
+SOME_THING (Id INTEGER, STudlYName text);</command>, the result will
+be that all of those components are forced to lower case, thus
+equivalent to <command> create table some_thing (id integer,
+studlyname text);</command>, and the name of the table and, in this case,
+the fields will all, in fact, be lower case. </para>
+
+</listitem>
 </itemizedlist>
 </para>
 
@@ -54,7 +66,9 @@
 		password => undef,	# password for user
 		parent => 1,		# which node is parent to this node
 		noforward => undef	# shall this node be set up to forward results?
-                sslmode => undef        # SSL mode argument - determine priority of SSL usage = disable,allow,prefer,require
+                sslmode => undef        # SSL mode argument - determine 
+                                        # priority of SSL usage
+                                        # = disable,allow,prefer,require
 );
 </programlisting>
 </sect2>
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.27.2.1
retrieving revision 1.27.2.2
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.27.2.1 -r1.27.2.2
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -97,7 +97,8 @@
    </para>
   </partintro>
   <!-- **************************************** -->
-  <refentry id ="stmtinclude"><refmeta><refentrytitle>INCLUDE</refentrytitle> </refmeta>
+  <refentry id ="stmtinclude"><refmeta><refentrytitle>INCLUDE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>INCLUDE</refname>
     
@@ -130,7 +131,8 @@
    </refsect1>
   </refentry>
   <!-- **************************************** -->
-  <refentry id ="stmtdefine"><refmeta><refentrytitle>DEFINE</refentrytitle> </refmeta>
+  <refentry id ="stmtdefine"><refmeta><refentrytitle>DEFINE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DEFINE</refname>
     
@@ -177,8 +179,10 @@
 
 create set ( @sakaiMovies, comment = 'movies' );
 
-set add table( set @sakaiMovies, id = 1, @fqn = 'public.customers', comment = 'sakai customers' );
-set add table( set @sakaiMovies, id = 2, @fqn = 'public.tapes',     comment = 'sakai tapes' );
+set add table( set @sakaiMovies, id = 1, @fqn = 'public.customers', 
+               comment = 'sakai customers' );
+set add table( set @sakaiMovies, id = 2, @fqn = 'public.tapes',     
+               comment = 'sakai tapes' );
 echo 'But @sakaiMovies will display as a string, and is not expanded';
     </programlisting>
    </refsect1>
@@ -201,7 +205,8 @@
   </partintro>
   <!-- **************************************** -->
   
-  <refentry id ="clustername"><refmeta><refentrytitle>CLUSTER NAME</refentrytitle> </refmeta>
+  <refentry id ="clustername"><refmeta><refentrytitle>CLUSTER NAME</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>CLUSTER NAME</refname>
     
@@ -246,11 +251,11 @@
   
 <!-- **************************************** -->
 
-  <refentry id ="admconninfo"><refmeta><refentrytitle>ADMIN CONNINFO</refentrytitle> </refmeta>
+  <refentry id ="admconninfo"><refmeta><refentrytitle>ADMIN CONNINFO</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>ADMIN CONNINFO</refname>
-    
-    <refpurpose> preamble - identifying <productname>PostgreSQL</productname> database </refpurpose>
+    <refpurpose> preamble - identifying &postgres; database </refpurpose>
    </refnamediv>
    <refsynopsisdiv>
     <cmdsynopsis>
@@ -269,7 +274,7 @@
      function. The user used to connect must be the special
      replication superuser, as some of the actions performed later may
      include operations that are strictly reserved for database
-     superusers by PostgreSQL.
+     superusers by &postgres;.
     </para>
 
     <para>
@@ -302,7 +307,7 @@
    </note>
 
    <para> For more details on the distinction between this and <xref
-   linkend="stmtstorepath">, see <xref linkend="plainpaths">.</para>
+   linkend="stmtstorepath">, see &rplainpaths;.</para>
 
    </Refsect1>
    <Refsect1><Title>Example</Title>
@@ -319,7 +324,8 @@
  <title>Configuration and Action commands</title>
 <!-- **************************************** -->
   
-  <refentry id ="stmtecho"><refmeta><refentrytitle>ECHO</refentrytitle> </refmeta>
+  <refentry id ="stmtecho"><refmeta><refentrytitle>ECHO</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>ECHO</refname>
     
@@ -346,7 +352,8 @@
   
   <!-- **************************************** -->
   
-  <refentry id ="stmtexit"><refmeta><refentrytitle>EXIT</refentrytitle> </refmeta>
+  <refentry id ="stmtexit"><refmeta><refentrytitle>EXIT</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>EXIT</refname>
     
@@ -374,9 +381,11 @@
    </Refsect1>
   </Refentry>
 
+  <!-- **************************************** -->
   <refentry id="stmtinitcluster">
    <refmeta>
     <refentrytitle>INIT CLUSTER</refentrytitle>
+     <manvolnum>7</manvolnum>
    </refmeta>
    <refnamediv>
     <refname>INIT CLUSTER</refname>
@@ -391,13 +400,11 @@
    <refsect1>
     <title>Description</title> 
 
-    <para> Initialize the first node in a new
-    &slony1; replication cluster.  The
-    initialization process consists of creating the cluster namespace,
-    loading all the base tables, functions, procedures and
-    initializing the node, using <xref
-    linkend="function.initializelocalnode-integer-text"> and <xref
-    linkend= "function.enablenode-integer">.
+    <para> Initialize the first node in a new &slony1; replication
+    cluster.  The initialization process consists of creating the
+    cluster namespace, loading all the base tables, functions,
+    procedures and initializing the node, using
+    &funinitializelocalnode; and &funenablenode;.
      
      <variablelist>
       <varlistentry><term><literal>ID</literal></term>
@@ -406,7 +413,7 @@
       
       <varlistentry><term><literal>COMMENT = 'comment
       text'</literal></term> <listitem><para> A descriptive text added
-      to the node entry in the table <xref linkend="table.sl-node">.
+      to the node entry in the table &slnode;. 
       </para></listitem>
       </varlistentry>
      </variablelist>
@@ -419,7 +426,7 @@
     <application>slonik</application> utility), while on the system
     where the node database is running the shared objects of the
     &slony1; system must be installed in the
-    PostgreSQL library directory. Also the procedural language
+    &postgres; library directory. Also the procedural language
     PL/pgSQL is assumed to already be installed in the target
     database.</para>
    </refsect1>
@@ -443,7 +450,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id ="stmtstorenode"><refmeta><refentrytitle>STORE NODE</refentrytitle> </refmeta>
+  <refentry id ="stmtstorenode"><refmeta><refentrytitle>STORE NODE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>STORE NODE</refname>
     <refpurpose> Initialize &slony1; node </refpurpose>
@@ -472,15 +480,16 @@
       </varlistentry>
       
       <varlistentry><term><literal> COMMENT = 'description' </literal></term>
-       <listitem><para> A descriptive text added to the node entry in the table <xref linkend="table.sl-node"></para></listitem>
+       <listitem><para> A descriptive text added to the node entry in the table &slnode;</para></listitem>
       </varlistentry>
       
       <varlistentry><term><literal> SPOOLNODE = boolean </literal></term>
        
        <listitem><para>Specifies that the new node is a virtual spool
-	 node for file archiving of replication log. If true
-	 <application>slonik</application> will not attempt to initialize a database
-	 with the replication schema.</para></listitem>
+       node for file archiving of replication log.  If true,
+       <application>slonik</application> will not attempt to
+       initialize a database with the replication
+       schema.</para></listitem>
        
       </varlistentry>
       <varlistentry><term><literal> EVENT NODE = ival </literal></term>
@@ -492,9 +501,7 @@
      </variablelist>
     </para>
 
-    <para> This uses <xref linkend=
-    "function.initializelocalnode-integer-text"> and <xref linkend=
-    "function.enablenode-integer">. </para>
+    <para> This uses &funinitializelocalnode; and &funenablenode;. </para>
     
    </Refsect1>
    <Refsect1><Title>Example</Title>
@@ -505,7 +512,8 @@
   </Refentry>
   
 <!-- **************************************** -->
-  <refentry id="stmtdropnode"><refmeta><refentrytitle>DROP NODE</refentrytitle> </refmeta>
+  <refentry id="stmtdropnode"><refmeta><refentrytitle>DROP NODE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP NODE</refname>
     
@@ -535,8 +543,7 @@
      </variablelist>
     </para>
 
-    <para> This uses <xref linkend=
-    "function.dropnode-integer">. </para>
+    <para> This uses &fundropnode;. </para>
 
     <para> When you invoke <command>DROP NODE</command>, one of the
     steps is to run <command>UNINSTALL NODE</command>.</para>
@@ -545,8 +552,7 @@
    (this is particularly common for Java application frameworks with
    connection pools), the connections may cache query plans that
    include the pre-<command>DROP NODE</command> state of things, and
-   you will get <link linkend="missingoids"> error messages indicating
-   missing OIDs</link>.</para>
+   you will get &rmissingoids;.</para>
 
    <para>After dropping a node, you may also need to recycle
    connections in your application.</para></warning>
@@ -560,7 +566,8 @@
   </refentry>
 
 <!-- **************************************** -->
-  <refentry id="stmtuninstallnode"><refmeta><refentrytitle>UNINSTALL NODE</refentrytitle> </refmeta>
+  <refentry id="stmtuninstallnode"><refmeta><refentrytitle>UNINSTALL NODE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>UNINSTALL NODE</refname>
     
@@ -587,7 +594,7 @@
      </variablelist>
     </para>
 
-    <para> This uses <xref linkend= "function.uninstallnode">. </para>
+    <para> This uses &fununinstallnode;. </para>
 
     <para> The difference between <command>UNINSTALL NODE</command>
     and <command>DROP NODE</command> is that all <command>UNINSTALL
@@ -598,8 +605,7 @@
    (this is particularly common for Java application frameworks with
    connection pools), the connections may cache query plans that
    include the pre-<command>UNINSTALL NODE</command> state of things,
-   and you will get <link linkend="missingoids"> error messages
-   indicating missing OIDs</link>.</para>
+   and you will get &rmissingoids;.</para>
 
    <para>After dropping a node, you may also need to recycle
    connections in your application.</para></warning>
@@ -614,7 +620,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtrestartnode"><refmeta><refentrytitle>RESTART NODE</refentrytitle> </refmeta>
+  <refentry id="stmtrestartnode"><refmeta><refentrytitle>RESTART NODE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>RESTART NODE</refname>
 
@@ -654,7 +661,7 @@
   <!-- **************************************** -->
 
   <refentry id="stmtstorepath"><refmeta><refentrytitle>STORE
-     PATH</refentrytitle> </refmeta>
+     PATH</refentrytitle><manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>STORE PATH</refname>
     
@@ -707,7 +714,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.storepath-integer-integer-text-integer">. </para>
+    <para> This uses &funstorepath;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -721,7 +728,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdroppath"><refmeta><refentrytitle>DROP PATH</refentrytitle> </refmeta>
+  <refentry id="stmtdroppath"><refmeta><refentrytitle>DROP PATH</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP PATH</refname>
     
@@ -764,7 +772,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtstorelisten"><refmeta><refentrytitle>STORE LISTEN</refentrytitle> </refmeta>
+  <refentry id="stmtstorelisten"><refmeta><refentrytitle>STORE LISTEN</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>STORE LISTEN</refname>
     
@@ -813,8 +822,8 @@
      </varlistentry>
     </variablelist>
 
-    <para> This uses <xref linkend= "function.storelisten-integer-integer-integer">. </para>
-    <para> For more details, see <xref linkend="listenpaths">.</para>
+    <para> This uses &funstorelisten;. </para>
+    <para> For more details, see &rlistenpaths;.</para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -822,7 +831,11 @@
     </programlisting>
    </refsect1>
   </refentry>
-  <refentry id="stmtdroplisten"><refmeta><refentrytitle>DROP LISTEN</refentrytitle> </refmeta>
+
+<!-- **************************************** -->
+
+  <refentry id="stmtdroplisten"><refmeta><refentrytitle>DROP LISTEN</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP LISTEN</refname>
     
@@ -855,7 +868,7 @@
      </varlistentry>
     </variablelist>
     
-    <para> This uses <xref linkend= "function.droplisten-integer-integer-integer">. </para>
+    <para> This uses &fundroplisten;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -867,7 +880,8 @@
 
 <!-- **************************************** -->
 
-<refentry id="stmttableaddkey"><refmeta><refentrytitle>TABLE ADD KEY</refentrytitle> </refmeta>
+<refentry id="stmttableaddkey"><refmeta><refentrytitle>TABLE ADD KEY</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>TABLE ADD KEY</refname>
     
@@ -918,7 +932,7 @@
     we can't see there being terribly much interest in replicating
     tables that contain no application data.</para> </note>
     
-    <para> This uses <xref linkend= "function.tableaddkey-text">. </para>
+    <para> This uses &funtableaddkey;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -931,7 +945,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtcreateset"><refmeta><refentrytitle>CREATE SET</refentrytitle> </refmeta>
+  <refentry id="stmtcreateset"><refmeta><refentrytitle>CREATE SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>CREATE SET</refname>
     
@@ -977,7 +992,7 @@
      </varlistentry>
     </variablelist>
     
-    <para> This uses <xref linkend= "function.storeset-integer-text">. </para>
+    <para> This uses &funstoreset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -991,7 +1006,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdropset"><refmeta><refentrytitle>DROP SET</refentrytitle> </refmeta>
+  <refentry id="stmtdropset"><refmeta><refentrytitle>DROP SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>DROP SET</refname>
     
@@ -1022,7 +1038,7 @@
      </varlistentry>
     </variablelist>
     
-       <para> This uses <xref linkend= "function.dropset-integer">. </para>
+       <para> This uses &fundropset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1035,7 +1051,7 @@
 <!-- **************************************** -->
 
   <refentry id="stmtmergeset"><refmeta><refentrytitle>MERGE
-     SET</refentrytitle> </refmeta>
+     SET</refentrytitle><manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>MERGE SET</refname>
     
@@ -1073,7 +1089,7 @@
      </varlistentry>
     </variablelist>
     
-       <para> This uses <xref linkend= "function.mergeset-integer-integer">. </para>
+       <para> This uses &funmergeset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1086,7 +1102,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtsetaddtable"><refmeta><refentrytitle>SET ADD TABLE</refentrytitle> </refmeta>
+  <refentry id="stmtsetaddtable"><refmeta><refentrytitle>SET ADD TABLE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET ADD TABLE</refname>
     
@@ -1124,7 +1141,11 @@
 	 these numbers should represent any applicable table hierarchy
 	 to make sure the <application>slonik</application> command
 	 scripts do not deadlock at any critical
-	 moment.</para></listitem>
+	 moment.</para>
+
+         <para> This ID must be unique across all sets; you cannot
+         have two tables in the same cluster with the same
+         ID. </para></listitem>
       </varlistentry>
       <varlistentry><term><literal> FULLY QUALIFIED NAME = 'string' </literal></term>
        <listitem><para> The full table name as described in
@@ -1147,7 +1168,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.setaddtable-integer-integer-text-name-text">. </para>
+    <para> This uses &funsetaddtable;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1164,7 +1185,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtsetaddsequence"><refmeta><refentrytitle>SET ADD SEQUENCE</refentrytitle> </refmeta>
+  <refentry id="stmtsetaddsequence"><refmeta><refentrytitle>SET ADD SEQUENCE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET ADD SEQUENCE</refname>
     
@@ -1214,7 +1236,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.setaddsequence-integer-integer-text-text">. </para>
+    <para> This uses &funsetaddsequence;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1231,7 +1253,8 @@
   
 <!-- **************************************** -->
 
-  <refentry id="stmtsetdroptable"><refmeta><refentrytitle>SET DROP TABLE</refentrytitle> </refmeta>
+  <refentry id="stmtsetdroptable"><refmeta><refentrytitle>SET DROP TABLE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET DROP TABLE</refname>
     
@@ -1265,7 +1288,7 @@
   <listitem><para> Unique ID of the table.</para></listitem></varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.setdroptable-integer">. </para>
+    <para> This uses &funsetdroptable;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1279,7 +1302,8 @@
   
 <!-- **************************************** -->
 
-  <refentry id="stmtsetdropsequence"><refmeta><refentrytitle>SET DROP SEQUENCE</refentrytitle> </refmeta>
+  <refentry id="stmtsetdropsequence"><refmeta><refentrytitle>SET DROP SEQUENCE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET DROP SEQUENCE</refname>
     
@@ -1309,7 +1333,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.setdropsequence-integer">. </para>
+    <para> This uses &funsetdropsequence;. </para>
    </refsect1>
 <refsect1><title>Example</title>
     <programlisting>
@@ -1324,7 +1348,7 @@
 <!-- **************************************** -->
   
   <refentry id="stmtsetmovetable"><refmeta><refentrytitle>SET MOVE
-     TABLE</refentrytitle> </refmeta>
+     TABLE</refentrytitle><manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET MOVE TABLE</refname>
 
@@ -1364,7 +1388,7 @@
   <listitem><para> Unique ID of the set to which the table should be added.</para></listitem></varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.setmovetable-integer-integer">. </para>
+    <para> This uses &funsetmovetable;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1380,7 +1404,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtsetmovesequence"><refmeta><refentrytitle>SET MOVE SEQUENCE</refentrytitle> </refmeta>
+  <refentry id="stmtsetmovesequence"><refmeta><refentrytitle>SET MOVE SEQUENCE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET MOVE SEQUENCE</refname>
     
@@ -1425,7 +1450,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.setmovesequence-integer-integer">. </para>
+    <para> This uses &funsetmovesequence;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1441,7 +1466,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtstoretrigger"><refmeta><refentrytitle>STORE TRIGGER</refentrytitle> </refmeta>
+  <refentry id="stmtstoretrigger"><refmeta><refentrytitle>STORE TRIGGER</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>STORE TRIGGER</refname>
     
@@ -1482,7 +1508,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.storetrigger-integer-name">. </para>
+    <para> This uses &funstoretrigger;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1496,7 +1522,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdroptrigger"><refmeta><refentrytitle>DROP TRIGGER</refentrytitle> </refmeta>
+  <refentry id="stmtdroptrigger"><refmeta><refentrytitle>DROP TRIGGER</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP TRIGGER</refname>
     
@@ -1534,7 +1561,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.droptrigger-integer-name">. </para>
+    <para> This uses &fundroptrigger;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1547,7 +1574,8 @@
   </refentry>
   
 <!-- **************************************** -->
-  <refentry id="stmtsubscribeset"><refmeta><refentrytitle>SUBSCRIBE SET</refentrytitle> </refmeta>
+  <refentry id="stmtsubscribeset"><refmeta><refentrytitle>SUBSCRIBE SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>SUBSCRIBE SET</refname>
     
@@ -1630,7 +1658,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.subscribeset-integer-integer-integer-boolean">. </para>
+    <para> This uses &funsubscribeset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1646,7 +1674,8 @@
   
 <!-- **************************************** -->
 
-  <refentry id="stmtunsubscribeset"><refmeta><refentrytitle>UNSUBSCRIBE SET</refentrytitle> </refmeta>
+  <refentry id="stmtunsubscribeset"><refmeta><refentrytitle>UNSUBSCRIBE SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>UNSUBSCRIBE SET</refname>
 
@@ -1683,7 +1712,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.unsubscribeset-integer-integer">. </para>
+    <para> This uses &fununsubscribeset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1698,7 +1727,8 @@
   
 <!-- **************************************** -->
 
-  <refentry id ="stmtlockset"><refmeta><refentrytitle>LOCK SET</refentrytitle> </refmeta>
+  <refentry id ="stmtlockset"><refmeta><refentrytitle>LOCK SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>LOCK SET</refname>
     
@@ -1725,9 +1755,17 @@
     transaction to the same database itself since this would result in
     blocking itself forever.</para>
 
-    <para> Note that this is a <link linkend="locking"> locking
-    operation, </link> which means that it can get stuck behind other
-    database activity.
+    <para> Note that this is a &rlocking; operation, which means that
+    it can get stuck behind other database activity.</para>
+
+    <para> The operation waits for transaction IDs to advance so
+    that no data is missed on the new origin.  Thus, if there are
+    long-running transactions on the source node, this operation
+    will wait for those transactions to complete.  Unfortunately, if
+    you have another database on the same postmaster as the origin
+    node, long-running transactions on that database will also be
+    considered, even though they are essentially independent.
      
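The wait-on-transactions rule above can be sketched as a small check: the lock may not proceed while any transaction that predates the lock snapshot is still open, on any database under the postmaster. This is a hypothetical Python simulation of the condition, not Slony-I's actual C implementation:

```python
def lock_set_can_proceed(lock_xid, active_xids):
    """LOCK SET may proceed only once every transaction that was in
    progress when the lock snapshot was taken (xid <= lock_xid) has
    finished -- regardless of which database on the postmaster those
    transactions belong to."""
    return all(xid > lock_xid for xid in active_xids)

# Snapshot taken at xid 1000; a long-running transaction (xid 950)
# in *any* database on the same postmaster blocks the operation.
assert not lock_set_can_proceed(1000, [950, 1005, 1010])
# Once that old transaction commits or aborts, the lock proceeds.
assert lock_set_can_proceed(1000, [1005, 1010])
```

In practice this is why the docs advise checking for long-running transactions before issuing LOCK SET or MOVE SET.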
      <variablelist>
       <varlistentry><term><literal> ID = ival </literal></term>
@@ -1741,7 +1779,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.lockset-integer">. </para>
+    <para> This uses &funlockset;. </para>
    </Refsect1>
    <Refsect1><Title>Example</Title>
     <Programlisting>
@@ -1755,7 +1793,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtunlockset"><refmeta><refentrytitle>UNLOCK SET</refentrytitle> </refmeta>
+  <refentry id="stmtunlockset"><refmeta><refentrytitle>UNLOCK SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>UNLOCK SET</refname>
     
@@ -1783,7 +1822,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.unlockset-integer">. </para>
+    <para> This uses &fununlockset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1798,7 +1837,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtmoveset"><refmeta><refentrytitle>MOVE SET</refentrytitle> </refmeta>
+  <refentry id="stmtmoveset"><refmeta><refentrytitle>MOVE SET</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>MOVE SET</refname>
     
@@ -1834,9 +1874,8 @@
      <command>FAILOVER</command> winds up discarding the old origin
      node as being corrupted.</para>
      
-    <para> Note that this is a <link linkend="locking"> locking
-    operation, </link> which means that it can get stuck behind other
-    database activity.
+    <para> Note that this is a &rlocking; operation, which means that
+    it can get stuck behind other database activity.
      
      <variablelist>
       <varlistentry><term><literal> ID = ival </literal></term>
@@ -1855,7 +1894,7 @@
       </varlistentry>
      </variablelist>
     </para>
-    <para> This uses <xref linkend= "function.moveset-integer-integer">. </para>
+    <para> This uses &funmoveset;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1870,7 +1909,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtfailover"><refmeta><refentrytitle>FAILOVER</refentrytitle> </refmeta>
+  <refentry id="stmtfailover"><refmeta><refentrytitle>FAILOVER</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>FAILOVER</refname>
     
@@ -1924,7 +1964,7 @@
      </varlistentry>
     </variablelist>
     
-    <para> This uses <xref linkend= "function.failednode-integer-integer">. </para>
+    <para> This uses &funfailednode;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -1938,7 +1978,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtddlscript"><refmeta><refentrytitle>EXECUTE SCRIPT</refentrytitle> </refmeta>
+  <refentry id="stmtddlscript"><refmeta><refentrytitle>EXECUTE SCRIPT</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>EXECUTE SCRIPT</refname>
     
@@ -1957,12 +1998,13 @@
     
     <para> The specified event origin must be the origin of the set.
      The script file must not contain any <command>START</command> or
-     <command>COMMIT TRANSACTION</command> calls.  (This may change in
-     PostgreSQL 8.0 once nested transactions, aka savepoints, are
-     supported) In addition, non-deterministic DML statements (like
-     updating a field with <function>CURRENT_TIMESTAMP</function>) must
-     be avoided, since the data changes done by the script are
-     explicitly not replicated. </para>
+    <command>COMMIT TRANSACTION</command> calls.  (This changes
+    somewhat in &postgres; 8.0 once nested transactions, aka
+    savepoints, are supported.)  In addition, non-deterministic DML
+    statements (like updating a field with
+    <function>CURRENT_TIMESTAMP</function>) must be avoided, since the
+    data changes done by the script are explicitly not
+    replicated. </para>
 
     <variablelist>
      <varlistentry><term><literal> SET ID = ival </literal></term>
@@ -1995,11 +2037,10 @@
      </varlistentry>
     </variablelist>
     
-    <para> See also the warnings in <xref linkend="ddlchanges">.</para>
+    <para> See also the warnings in &rddlchanges;.</para>
 
-    <para> Note that this is a <link linkend="locking"> locking
-    operation, </link> which means that it can get stuck behind other
-    database activity.</para>
+    <para> Note that this is a &rlocking; operation, which means that
+    it can get stuck behind other database activity.</para>
      
     <para> At the start of this event, all tables in the specified set
     are unlocked via the function
@@ -2015,7 +2056,17 @@
     that the triggers be regenerated, otherwise they may be
     inappropriate for the new form of the table schema.</para>
 
-    <para> This uses <xref linkend= "function.ddlscript-integer-text-integer">. </para>
+    <para> Note that if you need to make reference to the cluster
+    name, you can use the token <command>@CLUSTERNAME@</command>; if
+    you need to make reference to the &slony1; namespace, you can use
+    the token <command>@NAMESPACE@</command>; both will be expanded
+    into the appropriate replacement values. </para>
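The token expansion above amounts to simple textual substitution before the script is applied. A minimal Python sketch, assuming (as the underscore-prefix convention elsewhere in these docs suggests) that `@NAMESPACE@` expands to the quoted `"_clustername"` schema:

```python
def expand_tokens(script, cluster_name):
    """Expand the two tokens EXECUTE SCRIPT recognises.  The exact
    quoting of the namespace is an assumption here: Slony-I names its
    schema by prefixing the cluster name with an underscore."""
    return (script
            .replace("@CLUSTERNAME@", cluster_name)
            .replace("@NAMESPACE@", '"_%s"' % cluster_name))

ddl = ("alter table public.accounts add column note text;\n"
       "comment on schema @NAMESPACE@ is 'cluster @CLUSTERNAME@';")
expanded = expand_tokens(ddl, "T1")
assert '"_T1"' in expanded and "cluster T1" in expanded
```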
+
+    <para> It generally seems a bad idea to use quotes in DDL scripts.
+    It appears preferable to handle that sort of thing <quote>out of
+    band.</quote> </para>
+
+    <para> This uses &funddlscript;. </para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -2030,7 +2081,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtupdatefunctions"><refmeta><refentrytitle>UPDATE FUNCTIONS</refentrytitle> </refmeta>
+  <refentry id="stmtupdatefunctions"><refmeta><refentrytitle>UPDATE FUNCTIONS</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>UPDATE FUNCTIONS</refname>
     
@@ -2070,7 +2122,8 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtwaitevent"><refmeta><refentrytitle>WAIT FOR EVENT</refentrytitle> </refmeta>
+  <refentry id="stmtwaitevent"><refmeta><refentrytitle>WAIT FOR EVENT</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>WAIT FOR EVENT</refname>
     
@@ -2112,7 +2165,7 @@
        
       </varlistentry>
       <varlistentry><term><literal> WAIT ON = ival </literal></term>
-       <listitem><para> The ID of the node where the <xref linkend="table.sl-confirm"> table
+       <listitem><para> The ID of the node where the &slconfirm; table
 	 is to be checked.  The default value is 1.</para></listitem>
        
       </varlistentry>
Index: bestpractices.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/bestpractices.sgml,v
retrieving revision 1.9.2.1
retrieving revision 1.9.2.2
diff -Ldoc/adminguide/bestpractices.sgml -Ldoc/adminguide/bestpractices.sgml -u -w -r1.9.2.1 -r1.9.2.2
--- doc/adminguide/bestpractices.sgml
+++ doc/adminguide/bestpractices.sgml
@@ -1,6 +1,7 @@
 <!-- $Id$ --> 
 <sect1 id="bestpractices">
 <title> &slony1; <quote>Best Practices</quote> </title>
+<indexterm><primary>best practices for &slony1; usage</primary></indexterm>
 
 <para> It is common for managers to have a desire to operate systems
 using some available, documented set of <quote>best practices.</quote>
@@ -81,9 +82,9 @@
 
 <para> At Afilias, some internal <citation>The 3AM Unhappy DBA's Guide
 to...</citation> guides have been created to provide checklists of
-what to do when <quote>unhappy</quote> things happen; this sort of
-material is highly specific to the applications running, so you would
-need to generate your own such documents.
+what to do when certain <quote>unhappy</quote> events take place; this
+sort of material is highly specific to the applications running, so
+you would need to generate your own such documents.
 </para>
 </listitem>
 
@@ -305,9 +306,9 @@
 <xref linkend="slonik"> scripts.</para>
 </listitem>
 
-<listitem><para> Handling Very Large Replication Sets </para></listitem>
+<listitem><para> Handling Very Large Replication Sets </para>
 
-<listitem><para> Some users have set up replication on replication sets that are
+<para> Some users have set up replication on replication sets that are
 tens to hundreds of gigabytes in size, which puts some added
 <quote>strain</quote> on the system, in particular where it may take
 several days for the <command>COPY_SET</command> event to complete.
@@ -315,6 +316,7 @@
+these sorts of situations.</para></listitem>
 
 </itemizedlist>
+
 <itemizedlist>
 
 <listitem><para> Drop all indices other than the primary key index
@@ -326,8 +328,8 @@
 recreating each index <quote>ex nihilo</quote>, as the latter can take
 substantial advantage of sort memory. </para>
 
-<para> In version 1.2, indices will be dropped and recreated
-automatically, which would make this unnecessary.</para>
+<para> In a future release, it is hoped that indices will be dropped
+and recreated automatically, which would eliminate this step.</para>
 </listitem>
 
 <listitem><para> If there are large numbers of updates taking place as
@@ -349,12 +351,13 @@
 number of those 90,000 <command>SYNC</command> events, it still reads
 through the entire table.  In such a case, you may never see
 replication catch up.
-</para> </listitem>
-
-<listitem><para> Several things can be done that will help, involving
-careful selection of <xref linkend="slon"> parameters:</para></listitem>
+</para> 
 
+<para> Several things can be done that will help, involving
+careful selection of <xref linkend="slon"> parameters:</para>
+</listitem>
 </itemizedlist>
+
 <itemizedlist>
 
 <listitem><para> Ensure that there exists, on the
@@ -390,7 +393,6 @@
 vacuum scripts, as there will be a buildup of unpurgeable data while
 the data is copied and the subscriber starts to catch up. </para>
 </listitem>
-
 </itemizedlist>
 
 </sect1>
Index: slonconf.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonconf.sgml,v
retrieving revision 1.8
retrieving revision 1.8.2.1
diff -Ldoc/adminguide/slonconf.sgml -Ldoc/adminguide/slonconf.sgml -u -w -r1.8 -r1.8.2.1
--- doc/adminguide/slonconf.sgml
+++ doc/adminguide/slonconf.sgml
@@ -286,6 +286,97 @@
       </listitem>
     </varlistentry>
 
+    <varlistentry id="slon-config-quit-sync-provider" xreflabel="quit_sync_provider">
+      <term><varname>quit_sync_provider</varname>  (<type>integer</type>)</term>
+      <indexterm>
+        <primary><varname>quit_sync_provider</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para> This must be used in conjunction with <xref
+        linkend="slon-config-quit-sync-finalsync">, and indicates
+        which provider node's worker thread should be watched to see
+        if the slon should terminate due to reaching some desired
+        <quote>final</quote> event number.</para>
+
+	<para>If the value is set to 0, this logic will be ignored.</para>
+      </listitem>
+    </varlistentry>
+    <varlistentry id="slon-config-quit-sync-finalsync" xreflabel="quit_sync_finalsync">
+      <term><varname>quit_sync_finalsync</varname>  (<type>integer</type>)</term>
+      <indexterm>
+        <primary><varname>quit_sync_finalsync</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para>Final event number to process.  This must be used in
+        conjunction with <xref linkend="slon-config-quit-sync-provider">, and
+        allows the <application>slon</application> to terminate itself
+        once it reaches a certain event for the specified
+        provider. </para>
+
+	<para>If the value is set to 0, this logic will be ignored.
+        </para>
+      </listitem>
+    </varlistentry>
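Taken together, `quit_sync_provider` and `quit_sync_finalsync` give slon a stopping condition. A hedged Python sketch of the decision (the real check lives in slon's worker-thread C code; names here are illustrative):

```python
def should_quit(provider_id, last_event, quit_sync_provider, quit_sync_finalsync):
    """slon terminates once the watched provider's worker thread has
    processed the configured <quote>final</quote> event number; a
    setting of 0 for either parameter disables the check entirely."""
    if quit_sync_provider == 0 or quit_sync_finalsync == 0:
        return False
    return provider_id == quit_sync_provider and last_event >= quit_sync_finalsync

assert not should_quit(1, 99999, 0, 0)       # logic disabled
assert not should_quit(1, 4999, 1, 5000)     # final event not yet reached
assert should_quit(1, 5000, 1, 5000)         # reached: slon exits
```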
+
+    <varlistentry id="slon-config-lag-interval" xreflabel="lag_interval">
+      <term><varname>lag_interval</varname>  (<type>string/interval</type>)</term>
+      <indexterm>
+        <primary><varname>lag_interval</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para>Indicates an interval by which this node should lag its
+        providers.  If set, this is used in the event processing loop
+        to modify what events are to be considered for queueing; those
+        events newer than <command> now() - lag_interval::interval
+        </command> are left out, to be processed later.  </para>
+
+	<para>If the value is left empty, this logic will be ignored.
+        </para>
+      </listitem>
+    </varlistentry>
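The `lag_interval` filter described above can be pictured as follows: each pass over pending events simply defers anything newer than `now() - lag_interval`. A minimal Python simulation (illustrative only; the real filtering happens in slon's event loop):

```python
from datetime import datetime, timedelta

def events_to_queue(events, lag_interval, now):
    """Mimic the event-loop filter: events newer than
    now() - lag_interval are left out, to be processed on a later
    pass.  `events` maps event id -> origin timestamp."""
    if lag_interval is None:      # empty setting: logic is ignored
        return sorted(events)
    cutoff = now - lag_interval
    return sorted(ev for ev, ts in events.items() if ts <= cutoff)

now = datetime(2005, 12, 9, 12, 0, 0)
events = {1: now - timedelta(hours=2), 2: now - timedelta(minutes=30)}
# With a one-hour lag, only the older event is queued this pass.
assert events_to_queue(events, timedelta(hours=1), now) == [1]
# With no lag configured, everything is queued.
assert events_to_queue(events, None, now) == [1, 2]
```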
+
+    <varlistentry id="slon-config-max-rowsize" xreflabel="sync_max_rowsize">
+      <term><varname>sync_max_rowsize</varname>  (<type>integer</type>)</term>
+      <indexterm>
+        <primary><varname>sync_max_rowsize</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para>Size above which an sl_log_?  row's
+        <envar>log_cmddata</envar> is considered large.  Up to 500
+        rows of this size are allowed in memory at once. Rows larger
+        than that count into the <envar>sync_max_largemem</envar>
+        space allocated and <function>free()</function>'ed on demand.
+        </para>
+
+	<para>The default value is 8192, meaning that your expected
+	memory consumption (for the LOG cursor) should not exceed 8MB.
+        </para>
+      </listitem>
+    </varlistentry>
+
+    <varlistentry id="slon-config-max-largemem" xreflabel="sync_max_largemem">
+      <term><varname>sync_max_largemem</varname>  (<type>integer</type>)</term>
+      <indexterm>
+        <primary><varname>sync_max_largemem</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para>Maximum memory allocated for large rows, where
+        <envar>log_cmddata</envar> are larger than
+        <envar>sync_max_rowsize</envar>.  </para>
+
+	<para>Note that the algorithm reads rows until
+	<emphasis>after</emphasis> this value is exceeded.  Otherwise,
+	a tuple larger than this value would stall replication.  As a
+	result, don't assume that memory consumption will remain
+	smaller than this value.
+        </para>
+
+        <para> The default value is 5242880.</para>
+      </listitem>
+    </varlistentry>
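The read-until-after-the-limit behaviour noted above is worth seeing concretely: because rows are accumulated until the cap is already exceeded, peak memory can overshoot `sync_max_largemem` by up to one large tuple. A hypothetical Python sketch of that accounting (not the actual slon allocator):

```python
def fetch_large_rows(row_sizes, sync_max_largemem):
    """Accumulate large log rows the way the docs describe: keep
    reading until *after* the limit is exceeded, so a single
    oversized tuple never stalls replication -- but peak memory may
    overshoot the configured ceiling by up to one row."""
    taken, used = [], 0
    for size in row_sizes:
        if used >= sync_max_largemem:
            break
        taken.append(size)
        used += size
    return taken, used

rows, used = fetch_large_rows([3000000, 3000000, 3000000], 5242880)
assert rows == [3000000, 3000000]   # second row pushes past the cap
assert used > 5242880               # consumption exceeds the limit
```

This is why the docs warn not to assume memory consumption stays below the configured value.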
+
+    
+
   </variablelist>
 </sect1>
 </article>
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.15
retrieving revision 1.15.2.1
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.15 -r1.15.2.1
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -2,7 +2,6 @@
 <sect1 id="concepts">
 <title>&slony1; Concepts</title>
 
-
 <para>In order to set up a set of &slony1; replicas, it is necessary
 to understand the following major abstractions that it uses:</para>
 
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.13
retrieving revision 1.13.2.1
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.13 -r1.13.2.1
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -3,19 +3,29 @@
 
 <indexterm><primary>replicating a first database</primary></indexterm>
 
-<para>In this example, we will be replicating a brand new pgbench
-database.  The mechanics of replicating an existing database are
-covered here, however we recommend that you learn how &slony1;
-functions by using a fresh new non-production database.</para>
+<para>In this example, we will be replicating a brand new
+<application>pgbench</application> database.  The mechanics of
+replicating an existing database are covered here, however we
+recommend that you learn how &slony1; functions by using a fresh new
+non-production database.</para>
+
+<para> Note that <application>pgbench</application> is a
+<quote>benchmark</quote> tool that is in the &postgres; set of
+<filename>contrib</filename> tools. If you build &postgres; from
+source, you can readily head to <filename>contrib/pgbench</filename>
+and do a <command>make install</command> to build and install it; you
+may discover it included in packaged binary &postgres;
+installations.</para>
 
 <para>The &slony1; replication engine is trigger-based, allowing us to
 replicate databases (or portions thereof) running under the same
 postmaster.</para>
 
-<para>This example will show how to replicate the pgbench database
-running on localhost (master) to the pgbench slave database also
-running on localhost (slave).  We make a couple of assumptions about
-your &postgres; configuration:
+<para>This example will show how to replicate the
+<application>pgbench</application> database running on localhost
+(master) to the <application>pgbench</application> slave database also running on localhost
+(slave).  We make a couple of assumptions about your &postgres;
+configuration:
 
 <itemizedlist>
 
@@ -69,7 +79,7 @@
 </warning></para>
 
 
-<sect2><title>Creating the pgbenchuser</title>
+<sect2><title>Creating the <application>pgbench</application> user</title>
 
 <para><command>
 createuser -A -D $PGBENCHUSER
@@ -131,7 +141,7 @@
 tool. It is a specialized scripting aid that mostly calls stored
 procedures in the master/slave (node) databases.  The script to create
 the initial configuration for the simple master-slave setup of our
-pgbench database looks like this:
+<application>pgbench</application> database looks like this:
 
 <programlisting>
 #!/bin/sh
@@ -194,8 +204,8 @@
 _EOF_
 </programlisting></para>
 
-<para>Is the <application>pgbench</application> still running?  If not start it
-again.</para>
+<para>Is the <application>pgbench</application> still running?  If
+not, then start it again.</para>
 
 <para>At this point we have 2 databases that are fully prepared.  One
 is the master database in which <application>pgbench</application> is
@@ -220,9 +230,9 @@
 are seeing is the synchronization of cluster configurations between
 the 2 <xref linkend="slon"> processes.</para>
 
-<para>To start replicating the 4 pgbench tables (set 1) from the
-master (node id 1) the the slave (node id 2), execute the following
-script.
+<para>To start replicating the 4 <application>pgbench</application>
+tables (set 1) from the master (node id 1) to the slave (node id 2),
+execute the following script.
 
 <programlisting>
 #!/bin/sh
@@ -248,12 +258,14 @@
 _EOF_
 </programlisting>
 </para>
-<para>Any second now, the replication daemon on <envar>$SLAVEHOST</envar> will start
-to copy the current content of all 4 replicated tables.  While doing
-so, of course, the pgbench application will continue to modify the
-database.  When the copy process is finished, the replication daemon
-on <envar>$SLAVEHOST</envar> will start to catch up by applying the
-accumulated replication log.  It will do this in little steps, 10
+
+<para>Any second now, the replication daemon on
+<envar>$SLAVEHOST</envar> will start to copy the current content of
+all 4 replicated tables.  While doing so, of course, the
+<application>pgbench</application> application will continue to modify
+the database.  When the copy process is finished, the replication
+daemon on <envar>$SLAVEHOST</envar> will start to catch up by applying
+the accumulated replication log.  It will do this in little steps, 10
 seconds worth of application work at a time.  Depending on the
 performance of the two systems involved, the sizing of the two
 databases, the actual transaction load and how well the two databases
@@ -267,8 +279,10 @@
 are in fact the same.</para>
 
 <para>The following script will create ordered dumps of the 2
-databases and compare them.  Make sure that <application>pgbench</application> has
-completed its testing, and that your slon sessions have caught up.
+databases and compare them.  Make sure that
+<application>pgbench</application> has completed, so that there are no
+new updates hitting the origin node, and that your slon sessions have
+caught up.
 
 <programlisting>
 #!/bin/sh
Index: prerequisites.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v
retrieving revision 1.18
retrieving revision 1.18.2.1
diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.18 -r1.18.2.1
--- doc/adminguide/prerequisites.sgml
+++ doc/adminguide/prerequisites.sgml
@@ -6,30 +6,10 @@
 
 <para>The platforms that have received specific testing at the time of
 this release are FreeBSD-4X-i368, FreeBSD-5X-i386, FreeBSD-5X-alpha,
-osX-10.3, Linux-2.4X-i386 Linux-2.6X-i386 Linux-2.6X-amd64,
+OS-X-10.3, Linux-2.4X-i386 Linux-2.6X-i386 Linux-2.6X-amd64,
 <trademark>Solaris</trademark>-2.8-SPARC,
-<trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1 and
-OpenBSD-3.5-sparc64.</para>
-
-<para>There have been reports of success at running &slony1; hosts
-that are running PostgreSQL on Microsoft
-<trademark>Windows</trademark>.  At this time, the
-<quote>binary</quote> applications (<emphasis>e.g.</emphasis> - <xref
-linkend="slonik">, <xref linkend="slon">) do not run on
-<trademark>Windows</trademark>, but a <xref linkend="slon"> running on
-one of the Unix-like systems has no reason to have difficulty
-connecting to a PostgreSQL instance running on
-<trademark>Windows</trademark>. </para>
-
-<para> It ought to be possible to port <xref linkend="slon"> and <xref
-linkend="slonik"> to run on <trademark>Windows</trademark>; the
-conspicuous challenge is of having a POSIX-like
-<filename>pthreads</filename> implementation for <xref
-linkend="slon">, as it uses that to have multiple threads of
-execution.  There are reports of there being a
-<filename>pthreads</filename> library for
-<trademark>Windows</trademark>, so nothing should prevent some
-interested party from volunteering to do the port.</para>
+<trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1, OpenBSD-3.5-sparc64
+and &windows; 2000, XP and 2003 (32 bit).</para>
 
 <sect2>
<title> &slony1; Software Dependencies</title>
@@ -38,7 +18,7 @@
 need to be able to be compiled from source at your site.</para>
 
 <para> In order to compile &slony1;, you need to have the following
-tools.
+tools:
 
 <itemizedlist>
 <listitem><para> GNU make.  Other make programs will not work.  GNU
@@ -98,20 +78,25 @@
 </ulink> along with <ulink url="http://openjade.sourceforge.net/">
 OpenJade.</ulink> </para></listitem>
 
+<listitem><para> On &windows; you will also need the same <ulink url=
+"http://www.postgresql.org/docs/faqs.FAQ_MINGW.html">MinGW/Msys
+Toolset</ulink> used to build &postgres; 8.0 and above.  In addition
+you will need to install <ulink url=
+"http://sourceware.org/pthreads-win32/">pthreads-win32
+2.x</ulink>. </para></listitem>
+
 </itemizedlist> </para>
 
 <para>Also check to make sure you have sufficient disk space.  You
 will need approximately 5MB for the source tree during build and
 installation.</para>
 
-<note><para>There are changes afoot for version 1.1 that ought to make
-it possible to compile &slony1; separately from &postgres;, which
-should make it practical for the makers of distributions of
-<productname>Linux</productname> and
+<note><para>In &slony1; version 1.1, it is possible to compile
+&slony1; separately from &postgres;, making it practical for the
+makers of distributions of <productname>Linux</productname> and
 <productname>FreeBSD</productname> to include precompiled binary
-packages for &slony1;, but until that happens, you need to be prepared
-to use versions of all this software that you compile
-yourself.</para></note>
+packages for &slony1;.  If no suitable packages are available, you
+will need to be prepared to compile &slony1; yourself.  </para></note>
 </sect2>
 
 <sect2>
@@ -130,10 +115,12 @@
 <para> All the servers used within the replication cluster need to
 have their Real Time Clocks in sync. This is to ensure that <xref
 linkend="slon"> doesn't generate errors with messages indicating that
-a subscriber is already ahead of its provider during replication.  We
-recommend you have <application>ntpd</application> running on all
-nodes, where subscriber nodes using the <quote>master</quote> provider
-host as their time server.</para>
+a subscriber is already ahead of its provider during replication.
+Interpreting logs when servers have a different idea of what time it
+is leads to confusion and frustration.  It is recommended that you
+have <application>ntpd</application> running on all nodes, with
+subscriber nodes using the <quote>master</quote> provider host as
+their time server.</para>
 
 <para> It is possible for &slony1; itself to function even in the face
 of there being some time discrepancies, but having systems <quote>in
Index: subscribenodes.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.sgml,v
retrieving revision 1.12
retrieving revision 1.12.2.1
diff -Ldoc/adminguide/subscribenodes.sgml -Ldoc/adminguide/subscribenodes.sgml -u -w -r1.12 -r1.12.2.1
--- doc/adminguide/subscribenodes.sgml
+++ doc/adminguide/subscribenodes.sgml
@@ -48,7 +48,8 @@
 subscribe it.</para>
 
 <para>When you subscribe a node to a set, you should see something
-like this in your <application>slon</application> logs for the provider node:
+like this in your <application>slon</application> logs for the
+provider node:
 
 <screen>
 DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET
@@ -79,6 +80,50 @@
 tables on the origin node, and verify that the row is copied to the
 new subscriber.
 </para>
+
+<warning> <para> If you create and subscribe a set that does not
+contain any tables, that can lead to a problem that will stop
+replication from proceeding. </para>
+
+<para> If a particular subscriber is only being fed sequences by one
+of its providers, the query that collects <command>SYNC</command>
+event data will not be constructed correctly, and you will see error
+messages similar to the following:</para>
+
+<screen>
+2005-04-13 07:11:28 PDT ERROR remoteWorkerThread_11: "declare LOG
+cursor for select log_origin, log_xid, log_tableid, log_actionseq,
+log_cmdtype, log_cmddata from "_T1".sl_log_1 where log_origin = 11 and
+( order by log_actionseq; " PGRES_FATAL_ERROR ERROR: syntax error at
+or near "order" at character 162
+</screen>
+
+<para> The function <xref
+linkend="function.subscribeset-integer-integer-integer-boolean"> will
+generate a warning if given a replication set that lacks any tables to
+replicate, as shown in the following example.</para>
+
+<screen>
+cbbrowne at dba2:/tmp> cat create_empty_set.slonik
+cluster name = T1;
+node 11 admin conninfo = 'dbname=slony_test1';
+node 22 admin conninfo = 'dbname=slony_test2';
+
+create set (id = 255, origin = 11, comment='blank empty set');
+subscribe set (id=255, provider = 11, receiver = 22, forward = false);
+</screen>
+
+<para> This leads to the following warning message: </para>
+
+<screen>
+cbbrowne at dba2:/tmp> slonik create_empty_set.slonik
+create_empty_set.slonik:6: NOTICE:  subscribeSet:: set 255 has no tables
+- risk of problems - see bug 1226
+create_empty_set.slonik:6: NOTICE: 
+http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1226
+cbbrowne at dba2:/tmp>
+</screen>
+</warning>
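The malformed query in the warning above comes from building a per-table condition list that turns out empty. This is a hypothetical Python reconstruction of how the LOG cursor's WHERE clause goes wrong (the real query builder is in slon's C remote-worker code):

```python
def build_log_where_clause(origin, table_ids):
    """Sketch of the bug: with no replicated tables the
    parenthesised per-table condition list is empty, leaving the
    fragment 'and ( order by ...' that triggers the syntax error."""
    conds = " or ".join("log_tableid = %d" % t for t in table_ids)
    return "where log_origin = %d and ( %s" % (origin, conds)

# A set containing tables yields a usable fragment ...
assert "log_tableid = 4" in build_log_where_clause(11, [1, 4])
# ... an empty set leaves the parenthesis dangling, as in the error shown.
assert build_log_where_clause(11, []).endswith("and ( ")
```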
 </sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
Index: legal.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/legal.sgml,v
retrieving revision 1.6
retrieving revision 1.6.2.1
diff -Ldoc/adminguide/legal.sgml -Ldoc/adminguide/legal.sgml -u -w -r1.6 -r1.6.2.1
--- doc/adminguide/legal.sgml
+++ doc/adminguide/legal.sgml
@@ -23,11 +23,11 @@
  </para>
 
  <para>
-  IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY
-  PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
-  DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS
-  SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA
-  HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY
+FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,
+INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND
+ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN
+ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  </para>
 
  <para>
@@ -40,7 +40,7 @@
  </para>
 
  <para> Note that <trademark>UNIX</trademark> is a registered trademark of The
- Open Group.  <trademark>Windows</trademark> is a registered trademark of
+ Open Group.  &windows; is a registered trademark of
  Microsoft Corporation in the United States and other countries.
  <trademark>Solaris</trademark> is a registered trademark of Sun Microsystems,
  Inc. <trademark>Linux</trademark> is a trademark of Linus Torvalds. 
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.40.2.2
retrieving revision 1.40.2.3
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.40.2.2 -r1.40.2.3
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -47,7 +47,26 @@
 
 <screen>
 DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432'
-ERROR  remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp,		  ev_minxid, ev_maxxid, ev_xip,		  ev_type,		  ev_data1, ev_data2,		  ev_data3, ev_data4,		  ev_data5, ev_data6,		  ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress
+ERROR  remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp,
+		  ev_minxid, ev_maxxid, ev_xip,
+		  ev_type,
+                  ev_data1, ev_data2,
+		  ev_data3, ev_data4,
+ 	          ev_data5, ev_data6,
+		  ev_data7, ev_data8 from "_pgbenchtest".sl_event e 
+where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress
+</screen>
+</para>
+
+<para> Alternatively, it may appear like...
+
+<screen>
+ERROR  remoteListenThread_2: "select ev_origin, ev_seqno, ev_timestamp,
+ev_minxid, ev_maxxid, ev_xip,        ev_type,        ev_data1, ev_data2,
+ev_data3, ev_data4,        ev_data5, ev_data6,        ev_data7, ev_data8
+from "_sl_p2t2".sl_event e where (e.ev_origin = '2' and e.ev_seqno >
+'0') order by e.ev_origin, e.ev_seqno" - could not receive data from
+server: Error 0
 </screen>
 </para>
 </question>
@@ -101,25 +120,26 @@
 <question><para> <xref linkend="slon"> does not restart after
 crash</para>
 
-<para> After an immediate stop of postgresql (simulation of system
-crash) in pg_catalog.pg_listener a tuple with
-relname='_${cluster_name}_Restart' exists. slon doesn't start because
-it thinks another process is serving the cluster on this node.  What
-can I do? The tuples can't be dropped from this relation.</para>
+<para> After an immediate stop of &postgres; (simulation of system
+crash) in <envar>pg_catalog.pg_listener</envar> a tuple with <command>
+relname='_${cluster_name}_Restart'</command> exists. slon doesn't
+start because it thinks another process is serving the cluster on this
+node.  What can I do? The tuples can't be dropped from this
+relation.</para>
 
 <para> The logs claim that <blockquote><para>Another slon daemon is
 serving this node already</para></blockquote></para></question>
 
 <answer><para> The problem is that the system table
-<envar>pg_catalog.pg_listener</envar>, used by
-<productname>PostgreSQL</productname> to manage event notifications,
-contains some entries that are pointing to backends that no longer
-exist.  The new <xref linkend="slon"> instance connects to the
-database, and is convinced, by the presence of these entries, that an
-old <application>slon</application> is still servicing this &slony1;
-node.</para>
+<envar>pg_catalog.pg_listener</envar>, used by &postgres; to manage
+event notifications, contains some entries that are pointing to
+backends that no longer exist.  The new <xref linkend="slon"> instance
+connects to the database, and is convinced, by the presence of these
+entries, that an old <application>slon</application> is still
+servicing this &slony1; node.</para>
 
-<para> The <quote>trash</quote> in that table needs to be thrown away.</para>
+<para> The <quote>trash</quote> in that table needs to be thrown
+away.</para>
 
 <para>It's handy to keep a slonik script similar to the following to
 run in such cases:
@@ -142,6 +162,13 @@
 
 <para>As of version 1.0.5, the startup process of slon looks for this
 condition, and automatically cleans it up.</para>
+
+<para> As of version 8.1 of &postgres;, the functions that manipulate
+<envar>pg_listener</envar> do not support this usage, so for &slony1;
+versions after 1.1.2 (<emphasis>e.g.</emphasis> 1.1.5), this
+<quote>interlock</quote> behaviour is handled via a new table, and the
+issue should be transparently <quote>gone.</quote> </para>
+
 </answer></qandaentry>
 
 <qandaentry>
@@ -155,8 +182,7 @@
 </answer></qandaentry>
 
 <qandaentry>
-<question><para>Slonik fails - cannot load
-<productname>PostgreSQL</productname> library -
+<question><para>Slonik fails - cannot load &postgres; library -
 <command>PGRES_FATAL_ERROR load '$libdir/xxid';</command></para>
 
 <para> When I run the sample setup script I get an error message similar
@@ -188,7 +214,14 @@
 machine(s).  Unfortunately, just about any mismatch will cause things
 not to link up quite right.  See also <link linkend="threadsafety">
 thread safety </link> concerning threading issues on Solaris
-...</para> </answer></qandaentry>
+...</para> 
+
+<para> Life is simplest if you only have one set of &postgres;
+binaries on a given server; in that case, there isn't a <quote>wrong
+place</quote> in which &slony1; components might get installed.  If
+you have several software installs, you'll have to verify that the
+right versions of &slony1; components are associated with the right
+&postgres; binaries. </para> </answer></qandaentry>
 
 <qandaentry>
 <question><para>Table indexes with FQ namespace names
@@ -972,13 +1005,11 @@
 </answer>
 
 <answer> <para> You can monitor for this condition inside the database
-only if the <productname> PostgreSQL </productname> <filename>
-postgresql.conf </filename> parameter
-<envar>stats_command_string</envar> is set to true.  If that is set,
-then you may submit the query <command> select * from pg_stat_activity
-where current_query like '%IDLE% in transaction'; </command> which
-will find relevant activity.
-</para> </answer>
+only if the &postgres; <filename> postgresql.conf </filename>
+parameter <envar>stats_command_string</envar> is set to true.  If that
+is set, then you may submit the query <command> select * from
+pg_stat_activity where current_query like '%IDLE% in transaction';
+</command> which will find relevant activity.  </para> </answer>
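A minimal sketch of the monitoring idea described above: given rows of the shape that a query against <envar>pg_stat_activity</envar> (or a scan of the process table) might return, pick out the backends sitting <quote>idle in transaction</quote>. The pids and query strings here are made up for illustration, not real server output.

```python
# Illustrative only: filter (pid, current_query) pairs for backends that
# are idle inside an open transaction, the condition the FAQ entry above
# suggests monitoring for.

def find_idle_in_transaction(rows):
    """Return (pid, query) pairs whose current query is idle in a transaction."""
    return [(pid, q) for pid, q in rows
            if "IDLE" in q and "in transaction" in q]

rows = [
    (4401, "<IDLE>"),                     # idle, but no open transaction
    (4402, "<IDLE> in transaction"),      # the problematic case
    (4403, "select * from accounts"),     # actively working
]
print(find_idle_in_transaction(rows))  # -> [(4402, '<IDLE> in transaction')]
```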
 
 <answer> <para> You should also be able to search for <quote> idle in
 transaction </quote> in the process table to find processes that are
@@ -990,10 +1021,9 @@
 pg_stat_activity </envar> may show you some query that has been
 running way too long.  </para> </answer>
 
-<answer> <para> There are plans for <productname> PostgreSQL
-</productname> to have a timeout parameter, <envar>
-open_idle_transaction_timeout </envar>, which would cause old
-transactions to time out after some period of disuse.  Buggy
+<answer> <para> There are plans for &postgres; to have a timeout
+parameter, <envar> open_idle_transaction_timeout </envar>, which would
+cause old transactions to time out after some period of disuse.  Buggy
 connection pool logic is a common culprit for this sort of thing.
 There are plans for <productname> <link linkend="pgpool"> pgpool
 </link> </productname> to provide a better alternative, eventually,
@@ -1030,7 +1060,7 @@
 <itemizedlist>
 
 <listitem><para> You'll need to identify from either the slon logs, or
-the PostgreSQL database logs exactly which statement it is that is
+the &postgres; database logs exactly which statement it is that is
 causing the error.</para></listitem>
 
 <listitem><para> You need to fix the Slony-defined triggers on the
@@ -1141,12 +1171,12 @@
 and load the data back in much faster than the <command>SUBSCRIBE
 SET</command> runs.  Why is that?  </para></question>
 
-<answer><para> &slony1; depends on there
-being an already existant index on the primary key, and leaves all
-indexes alone whilst using the <productname>PostgreSQL</productname>
-<command>COPY</command> command to load the data.  Further hurting
-performane, the <command>COPY SET</command> event starts by deleting
-the contents of tables, which potentially leaves a lot of dead tuples
+<answer><para> &slony1; depends on there being an already existent
+index on the primary key, and leaves all indexes alone whilst using
+the &postgres; <command>COPY</command> command to load the data.
+Further hurting performance, the <command>COPY SET</command> event
+starts by deleting the contents of tables, which potentially leaves a
+lot of dead tuples.
 </para>
 
 <para> When you use <command>pg_dump</command> to dump the contents of
@@ -1154,11 +1184,7 @@
 the very end.  It is <emphasis>much</emphasis> more efficient to
 create indexes against the entire table, at the end, than it is to
 build up the index incrementally as each row is added to the
-table.</para>
-
-<para> Unfortunately, dropping and recreating indexes <quote>on the
-fly,</quote> as it were, has proven thorny.  Doing it automatically
-hasn't been implemented.  </para></answer>
+table.</para></answer>
 
 <answer><para> If you can drop unnecessary indices while the
 <command>COPY</command> takes place, that will improve performance
@@ -1166,13 +1192,11 @@
 contain data that is about to be eliminated, that will improve
 performance <emphasis>a lot.</emphasis> </para></answer>
 
-<answer><para> There is a TODO item for implementation in
-<productname>PostgreSQL</productname> that adds a new option,
-something like <option>BULKLOAD</option>, which would defer revising
-indexes until the end, and regenerating indexes in bulk.  That will
-likely not be available until <productname>PostgreSQL</productname>
-8.1, but it should substantially improve performance once available.
-</para></answer>
+<answer><para> &slony1; version 1.1.5 and later versions should handle
+this automatically; it <quote>thumps</quote> on the indexes in the
+&postgres; catalog to hide them, in much the same way triggers are
+hidden, and then <quote>fixes</quote> the index pointers and reindexes
+the table. </para> </answer>
 </qandaentry>
 
 <qandaentry>
@@ -1562,7 +1586,7 @@
 <para> Of course, now that you have done all of the above, it's not compatible
 with standard Slony now. So you either need to implement 7.2 in a less
 hackish way, or you can also hack up slony to work without schemas on
-newer versions of PostgreSQL so they can talk to each other.
+newer versions of &postgres; so they can talk to each other.
 </para>
 <para> Almost immediately after getting the DB upgraded from 7.2 to 7.4, we
 deinstalled the hacked up Slony (by hand for the most part), and started
@@ -1592,6 +1616,58 @@
 </qandaentry>
 
 <qandaentry>
+<question> <para> I am finding some multibyte columns (Unicode, Big5)
+are being truncated.  Why?  </para> </question>
+
+<answer> <para> This was a bug present until a little after &slony1;
+version 1.1.0; the way in which columns were being captured by the
+<function>logtrigger()</function> function could clip off the last
+byte of a column represented in a multibyte format.  Check to see that
+your version of <filename>src/backend/slony1_funcs.c</filename> is
+1.34 or better; the patch was introduced in CVS version 1.34 of that
+file.  </para> </answer>
+</qandaentry>
+
+<qandaentry><question> <para> I need to rename a column that is in the
+primary key for one of my replicated tables.  That seems pretty
+dangerous, doesn't it?  I have to drop the table out of replication
+and recreate it, right?</para>
+</question>
+
+<answer><para> Actually, this is a scenario which works out remarkably
+cleanly.  &slony1; does indeed make intense use of the primary key
+columns, but actually does so in a manner that allows this sort of
+change to be made very nearly transparently.</para>
+
+<para> Suppose you revise a column name, as with the SQL DDL <command>
+alter table accounts rename column aid to cid; </command> This
+changes the name of the column in the table; it
+<emphasis>simultaneously</emphasis> renames the corresponding column
+in the primary key index.  The result is that, in the normal course of
+things, altering a column name affects both aspects simultaneously on
+a given node.</para>
+
+<para> The <emphasis>ideal</emphasis> and proper handling of this
+change would involve using <xref linkend="stmtddlscript"> to deploy
+the alteration, which ensures it is applied at exactly the right point
+in the transaction stream on each node.</para>
+
+<para> Interestingly, that isn't forcibly necessary.  As long as the
+alteration is applied on the replication set's origin before
+application on subscribers, things won't break irreparably.  Some
+<command>SYNC</command> events that do not include changes to the
+altered table can make it through without any difficulty...  At the
+point that the first update to the table is drawn in by a subscriber,
+<emphasis>that</emphasis> is the point at which
+<command>SYNC</command> events will start to fail, as the provider
+will indicate the <quote>new</quote> set of columns whilst the
+subscriber still has the <quote>old</quote> ones.  If you then apply
+the alteration to the subscriber, it can retry the
+<command>SYNC</command>, at which point it will, finding the
+<quote>new</quote> column names, work just fine.
+</para> </answer></qandaentry>
+
+<qandaentry>
 <question> <para> Replication has fallen behind, and it appears that the
 queries to draw data from <xref linkend="table.sl-log-1">/<xref
 linkend="table.sl-log-2"> are taking a long time to pull just a few
@@ -1608,7 +1684,6 @@
 </para>
 </answer>
 </qandaentry>
-
 <qandaentry>
 <question><para>The <xref linkend="slon"> processes servicing my
 subscribers are growing to enormous size, challenging system resources
@@ -1656,10 +1731,34 @@
 default modification to make is to change the second definition of
 <envar> SLON_DATA_FETCH_SIZE </envar> from 10 to 1. </para> </answer>
 
-<answer><para> There are plans to change the behaviour of <xref
-linkend="slon"> to better adapt to the size of these queries for
-version 1.2, so at that point, this note will hopefully become
-obsolete. </para> </answer>
+<answer><para> In version 1.2, configuration values <xref
+linkend="slon-config-max-rowsize"> and <xref
+linkend="slon-config-max-largemem"> are associated with a new
+algorithm that changes the logic as follows.  Rather than fetching 100
+rows worth of data at a time:</para>
+
+<itemizedlist>
+
+<listitem><para> The <command>fetch from LOG</command> query will draw
+in 500 rows at a time where the size of the attributes does not exceed
+<xref linkend="slon-config-max-rowsize">.  With default values, this
+restricts this aspect of memory consumption to about 8MB.  </para>
+</listitem>
+
+<listitem><para> Tuples with larger attributes are loaded until
+aggregate size exceeds the parameter <xref
+linkend="slon-config-max-largemem">.  By default, this restricts
+consumption of this sort to about 5MB.  This value is not a strict
+upper bound; if you have a tuple with attributes 50MB in size, it
+forcibly <emphasis>must</emphasis> be loaded into memory.  There is no
+way around that.  But <xref linkend="slon"> at least won't be trying
+to load in 100 such records at a time, chewing up 10GB of memory by
+the time it's done.  </para> </listitem>
+</itemizedlist>
+
+<para> This should alleviate problems people have been experiencing
+when they sporadically have series of very large tuples. </para>
+</answer>
 </qandaentry>
 </qandaset>
 
Index: usingslonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/usingslonik.sgml,v
retrieving revision 1.11
retrieving revision 1.11.2.1
diff -Ldoc/adminguide/usingslonik.sgml -Ldoc/adminguide/usingslonik.sgml -u -w -r1.11 -r1.11.2.1
--- doc/adminguide/usingslonik.sgml
+++ doc/adminguide/usingslonik.sgml
@@ -44,7 +44,7 @@
 <itemizedlist>
 <listitem><para> Named nodes, named sets</para>
 
-<para> This is supported by the (new in 1.1) <xref
+<para> This is supported in &slony1; 1.1 by the <xref
       linkend="stmtdefine"> and <xref linkend="stmtinclude"> statements.
 </para></listitem>
 
@@ -69,12 +69,13 @@
 <para> The test bed found in the <filename>src/ducttape</filename>
 directory takes this approach.</para></listitem>
 
-<listitem><para> The <xref linkend="altperl"> use Perl code to
-generate Slonik scripts.</para>
+<listitem><para> The <link linkend="altperl"> altperl tools </link>
+use Perl code to generate Slonik scripts.</para>
 
-<para> You define the cluster as a set of Perl objects; each script
-walks through the Perl objects as needed to satisfy whatever it is
-supposed to do.  </para></listitem>
+<para> You define the cluster's configuration as a set of Perl
+objects; each script walks through the Perl objects as needed to
+generate a slonik script for that script's purpose.
+</para></listitem>
 
 </itemizedlist>
 </sect1>
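The altperl approach above — describe the cluster as data, then walk that data to emit slonik scripts — can be sketched in miniature. The cluster name, node ids, and conninfo strings below are hypothetical examples, and the generator covers only the admin-conninfo preamble, not a full script.

```python
# Sketch: the cluster configuration as plain data, walked to generate the
# slonik preamble.  All names and conninfo strings here are made up.

CLUSTER = "testcluster"
NODES = {
    1: "dbname=db1 host=host1",
    2: "dbname=db2 host=host2",
}

def preamble():
    """Emit the 'cluster name' and 'node ... admin conninfo' lines."""
    lines = ["cluster name = %s;" % CLUSTER]
    for node_id, conninfo in sorted(NODES.items()):
        lines.append("node %d admin conninfo = '%s';" % (node_id, conninfo))
    return "\n".join(lines)

print(preamble())
```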
@@ -98,14 +99,16 @@
 	node 2 admin conninfo = 'dbname=$DB2';
 
 	try {
-		table add key (node id = 1, fully qualified name = 'public.history');
+		table add key (node id = 1, fully qualified name = 
+                               'public.history');
 	}
 	on error {
 		exit 1;
 	}
 
 	try {
-		create set (id = 1, origin = 1, comment = 'Set 1 - pgbench tables');
+		create set (id = 1, origin = 1, comment = 
+                            'Set 1 - pgbench tables');
 		set add table (set id = 1, origin = 1,
 			id = 1, fully qualified name = 'public.accounts',
 			comment = 'Table accounts');
@@ -157,12 +160,14 @@
 slonik <<_EOF_
 $PREAMBLE
 try {
-    table add key (node id = $origin, fully qualified name = 'public.history');
+    table add key (node id = $origin, fully qualified name = 
+                   'public.history');
 } on error {
     exit 1;
 }
 try {
-	create set (id = $mainset, origin = $origin, comment = 'Set $mainset - pgbench tables');
+	create set (id = $mainset, origin = $origin, 
+                    comment = 'Set $mainset - pgbench tables');
 	set add table (set id = $mainset, origin = $origin,
 		id = 1, fully qualified name = 'public.accounts',
 		comment = 'Table accounts');
@@ -204,12 +209,14 @@
 slonik <<_EOF_
 $PREAMBLE
 try {
-    table add key (node id = $origin, fully qualified name = 'public.history');
+    table add key (node id = $origin, fully qualified name = 
+                   'public.history');
 } on error {
     exit 1;
 }
 try {
-	create set (id = $mainset, origin = $origin, comment = 'Set $mainset - pgbench tables');
+	create set (id = $mainset, origin = $origin, 
+                    comment = 'Set $mainset - pgbench tables');
 $ADDTABLES
 } on error {
 	exit 1;
@@ -269,14 +276,17 @@
 <para> The developers of &slony1; anticipate that interested parties
 may wish to develop graphical tools as an alternative to Slonik; it
 would be entirely appropriate in such cases to submit configuration
-requests directly via the stored functions.</para>
+requests directly via the stored functions.  If you plan to do so, it
+is suggested that you examine how the stored functions are used in
+<filename>slonik.c</filename>, as that should be the most
+representative example of their correct use.</para>
 
 <para> When debugging problems in <quote>troubled</quote> &slony1;
 clusters, it has also occasionally proven useful to use the stored
 functions.  This has been particularly useful for cases where <xref
-       linkend="table.sl-listen"> configuration has been broken, and
-events have not been propagating properly.  The <quote>easiest</quote>
-fix was to:</para>
+linkend="table.sl-listen"> configuration has been broken, and events
+have not been propagating properly.  The <quote>easiest</quote> fix
+was to:</para>
 
 <para> <command> select
 _slonycluster.droplisten(li_origin,li_provider,li_receiver) from
Index: reshape.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v
retrieving revision 1.14
retrieving revision 1.14.2.1
diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.14 -r1.14.2.1
--- doc/adminguide/reshape.sgml
+++ doc/adminguide/reshape.sgml
@@ -24,16 +24,15 @@
 doubtless appropriate to issue a set of <xref
 linkend="stmtdroplisten"> operations to drop out obsolete paths
 between nodes and <xref linkend="stmtstorelisten"> to add the new
-ones.  At present, this is not changed automatically; at some point,
-<xref linkend="stmtmoveset"> and <xref
-linkend="stmtsubscribeset"> might change the paths as a side-effect.
-See <xref linkend="listenpaths"> for more information about this.  In
-version 1.1 and later, it is likely that the generation of <xref
-linkend="table.sl-listen"> entries will be entirely automated, where
-they will be regenerated when changes are made to <xref
-linkend="table.sl-path"> or <xref linkend="table.sl-path">, thereby
-making it unnecessary to even think about <xref
-linkend="stmtstorelisten">.</para></listitem>
+ones.  Up until version 1.1, this was not changed automatically; as of
+1.1, <xref linkend="stmtmoveset"> and <xref
+linkend="stmtsubscribeset"> change the paths as a side-effect.  See
+<xref linkend="listenpaths"> for more information about this.  In
+version 1.1 and later, generation of <xref linkend="table.sl-listen">
+entries is entirely automated, so that they are regenerated when
+changes are made to <xref linkend="table.sl-path"> or <xref
+linkend="table.sl-subscribe">, thereby making it unnecessary to even think
+about <xref linkend="stmtstorelisten">.</para></listitem>
 
 </itemizedlist>
 </para>
Index: startslons.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/startslons.sgml,v
retrieving revision 1.11
retrieving revision 1.11.2.1
diff -Ldoc/adminguide/startslons.sgml -Ldoc/adminguide/startslons.sgml -u -w -r1.11 -r1.11.2.1
--- doc/adminguide/startslons.sgml
+++ doc/adminguide/startslons.sgml
@@ -6,11 +6,15 @@
 
 <para>You need to run one <xref linkend="slon"> instance for each node
 in a &slony1; cluster, whether you consider that node a
-<quote>master</quote> or a <quote>slave</quote>. Since a <command>MOVE
-SET</command> or <command>FAILOVER</command> can switch the roles of
-nodes, slon needs to be able to function for both providers and
-subscribers.  It is not essential that these daemons run on any
-particular host, but there are some principles worth considering:
+<quote>master</quote> or a <quote>slave</quote>. On &windows;, when
+running as a service, things are slightly different: one slon service
+is installed, and a separate configuration file is registered for each
+node to be serviced by that machine. The main service then manages the
+individual slons itself. Since a <command>MOVE SET</command> or
+<command>FAILOVER</command> can switch the roles of nodes, slon needs
+to be able to function for both providers and subscribers.  It is not
+essential that these daemons run on any particular host, but there are
+some principles worth considering:
 
 <itemizedlist>
 
Index: intro.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v
retrieving revision 1.17
retrieving revision 1.17.2.1
diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.17 -r1.17.2.1
--- doc/adminguide/intro.sgml
+++ doc/adminguide/intro.sgml
@@ -3,15 +3,15 @@
 <title>Introduction to &slony1;</title>
 <sect2> <title>What &slony1; is</title>
 
-<para>&slony1; is a <quote>master to
-multiple slaves</quote> replication system supporting cascading and
-slave promotion.  The big picture for the development of
-&slony1; is as a master-slave system that
-includes all features and capabilities needed to replicate large
+<para>&slony1; is a <quote>master to multiple slaves</quote>
+replication system supporting cascading and slave promotion.  The big
+picture for the development of &slony1; is as a master-slave system
+that includes the sorts of capabilities needed to replicate large
 databases to a reasonably limited number of slave systems.
-<quote>Reasonable,</quote> in this context, is probably no more than a
-few dozen servers.  If the number of servers grows beyond that, the
-cost of communications becomes prohibitively high.</para>
+<quote>Reasonable,</quote> in this context, is on the order of a dozen
+servers.  If the number of servers grows beyond that, the cost of
+communications increases prohibitively, and the incremental benefit of
+each additional server falls off.</para>
 
 <para> See also <xref linkend="slonylistenercosts"> for a further
 analysis of costs associated with having many nodes.</para>
@@ -29,17 +29,27 @@
 <itemizedlist>
 <listitem><para> Sites where connectivity is really <quote>flakey</quote>
 </para></listitem>
-<listitem><para> Replication to nodes that are unpredictably connected.</para>
+<listitem><para> Replication to nodes that are unpredictably connected.</para></listitem>
 
-<para> Replicating a pricing database from a central server to sales
+<listitem><para> Replicating a pricing database from a central server to sales
 staff who connect periodically to grab updates.  </para></listitem>
+
+<listitem><para> Sites where configuration changes are made in a
+haphazard way.</para></listitem>
+
+<listitem><para> A <quote>web hosting</quote> situation where customers can
+independently make arbitrary changes to database schemas is not a good
+candidate for &slony1; usage. </para></listitem>
+
 </itemizedlist></para>
 
 <para> There is also a <link linkend="logshipping">file-based log
 shipping</link> extension where updates would be serialized into
 files.  Given that, log files could be distributed by any means
 desired without any need of feedback between the provider node and
-those nodes subscribing via <quote>log shipping.</quote></para>
+those nodes subscribing via <quote>log shipping.</quote> <quote>Log
+shipped</quote> nodes do not add to the costs of communicating events
+between &slony1; nodes.</para>
 
 <para> But &slony1;, by only having a single origin for each set, is
 quite unsuitable for <emphasis>really</emphasis> asynchronous multiway
@@ -48,10 +58,11 @@
 resolution</quote> akin to what is provided by <productname>Lotus
 <trademark>Notes</trademark></productname> or the
 <quote>syncing</quote> protocols found on PalmOS systems, you will
-really need to look elsewhere.  These sorts of replication models are
-not without merit, but they represent <emphasis>different</emphasis>
-replication scenarios that &slony1; does not attempt to
-address.</para>
+really need to look elsewhere.  </para> 
+
+<para> These other sorts of replication models are not without merit,
+but they represent <emphasis>different</emphasis> replication
+scenarios that &slony1; does not attempt to address.</para>
 
 </sect2>
 
@@ -73,16 +84,17 @@
 node failure, nor to automatically promote a node to a master or other
 data origin.</para>
 
-<para> It is quite possible that you may need to do that; that
-will require that you combine some network tools that evaluate
-<emphasis> to your satisfaction </emphasis> which nodes you consider
+<para> It is quite possible that you may need to do that; that will
+require that you combine some network tools that evaluate <emphasis>
+to your satisfaction </emphasis> which nodes you consider
 <quote>live</quote> and which nodes you consider <quote>dead</quote>
 along with some local policy to determine what to do under those
 circumstances.  &slony1; does not dictate any of that policy to
 you.</para></listitem>
 
-<listitem><para>&slony1; is not multi-master; it is not a connection broker, and
-it doesn't make you coffee and toast in the morning.</para></listitem>
+<listitem><para>&slony1; is not a multi-master replication system; it
+is not a connection broker, and it won't make you coffee and toast in
+the morning.</para></listitem>
 
 </itemizedlist>
 
@@ -97,24 +109,27 @@
 <sect2><title> Why doesn't &slony1; do automatic fail-over/promotion?
 </title>
 
-<para>That is properly the responsibility of network monitoring
-software, not &slony1;.  The configuration, fail-over paths, and
-preferred policies will be different for each site.  For example,
-keep-alive monitoring with redundant NIC's and intelligent HA switches
-that guarantee race-condition-free takeover of a network address and
-disconnecting the <quote>failed</quote> node will vary based on
-network configuration, vendor choices, and the combinations of
-hardware and software in use.  This is clearly the realm of network
-management software and not &slony1;.</para>
+<para>Determining whether a node has <quote>failed</quote> is properly
+the responsibility of network management software, not &slony1;.  The
+configuration, fail-over paths, and preferred policies will be
+different for each site.  For example, keep-alive monitoring with
+redundant NIC's and intelligent HA switches that guarantee
+race-condition-free takeover of a network address and disconnecting
+the <quote>failed</quote> node will vary based on network
+configuration, vendor choices, and the combinations of hardware and
+software in use.  This is clearly in the realm of network management
+and not &slony1;.</para>
 
 <para> Furthermore, choosing what to do based on the
-<quote>shape</quote> of the cluster represents local business policy.
-If &slony1; imposed failover policy on you, that might conflict with
-business requirements, thereby making &slony1; an unacceptable
-choice.</para>
+<quote>shape</quote> of the cluster represents local business policy,
+particularly in view of the fact that <link
+linkend="stmtfailover"><command>FAIL OVER</command></link> requires
+discarding the failed node. If &slony1; imposed failover policy on
+you, that might conflict with business requirements, thereby making
+&slony1; an unacceptable choice.</para>
 
 <para>As a result, let &slony1; do what it does best: provide database
-replication.</para></sect2>
+replication services.</para></sect2>
 
 <sect2><title> Current Limitations</title>
 
@@ -129,10 +144,11 @@
 you submit them as scripts via the <application>slonik</application>
 <xref linkend="stmtddlscript"> operation.  That is not
 <quote>automatic;</quote> you have to construct an SQL DDL script and
-submit it, and there are a number of further caveats.</para>
+submit it, and there are a number of further <link
+linkend="ddlchanges">caveats</link>.</para>
 
 <para>If you have those sorts of requirements, it may be worth
-exploring the use of &postgres; 8.0 <acronym>PITR</acronym> (Point In
+exploring the use of &postgres; 8.X <acronym>PITR</acronym> (Point In
 Time Recovery), where <acronym>WAL</acronym> logs are replicated to
 remote nodes.  Unfortunately, that has two attendant limitations:
 
@@ -215,10 +231,16 @@
 address update, one user, on one node, might update the phone number
 for an address, and another user might update the street address, and
 the conflict resolution system might try to apply these updates in a
-non-conflicting order.</para>
+non-conflicting order.  This can also be considered a form of
+<quote>table partitioning</quote> where a database table is treated as
+consisting of several <quote>sub-tables.</quote> </para>
 
 <para> Conflict resolution systems almost always require some domain
-knowledge of the application being used. </para>
+knowledge of the application being used, which means that they can
+only deal automatically with those conflicts where you have assigned a
+policy.  If they run into conflicts for which no policy is available,
+replication stops until someone applies some manual
+intervention. </para>
 </listitem>
 
 </itemizedlist>
Index: ddlchanges.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v
retrieving revision 1.15
retrieving revision 1.15.2.1
diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.15 -r1.15.2.1
--- doc/adminguide/ddlchanges.sgml
+++ doc/adminguide/ddlchanges.sgml
@@ -4,7 +4,7 @@
 
 <indexterm>
  <primary> DDL changes </primary>
- <secondary> changing the database schema </secondary>
+ <secondary>database schema changes</secondary>
 </indexterm>
 
 <para>When changes are made to the database schema,
@@ -38,9 +38,9 @@
 <listitem><para>The script <emphasis>must not</emphasis> contain
 transaction <command>BEGIN</command> or <command>END</command>
 statements, as the script is already executed inside a transaction.
-In &postgres; version 8, the introduction of nested transactions may
-modify this requirement somewhat, but you must still remain aware that
-the actions in the script are wrapped inside a
+In &postgres; version 8, the introduction of nested transactions
+changes this somewhat, but you must still remain aware that the
+actions in the script are wrapped inside a single
 transaction.</para></listitem>
 
 <listitem><para>If there is <emphasis>anything</emphasis> broken about
@@ -51,7 +51,12 @@
 certainly, fail the second time just as it did the first time.  I have
 found this scenario to lead to a need to go to the
 <quote>master</quote> node to delete the event to stop it from
-continuing to fail.</para></listitem>
+continuing to fail.</para>
+
+<para> The implication is that it is
+<emphasis>vital</emphasis> that modifications not be made in a
+haphazard way on one node or another.  The schemas must always stay in
+sync.</para> </listitem>
 
 <listitem><para> For <application>slon</application> to, at that
 point, <quote>panic</quote> is probably the
@@ -75,7 +80,8 @@
 <screen>
 BEGIN;
 LOCK TABLE table_name;
-SELECT _oxrsorg.altertablerestore(tab_id);--tab_id is _slony_schema.sl_table.tab_id
+SELECT _oxrsorg.altertablerestore(tab_id);
+--tab_id is _slony_schema.sl_table.tab_id
 </screen></para>
 
 <para> After the script executes, each table is
@@ -83,7 +89,8 @@
 updates at the origin or that denies updates on subscribers:
 
 <screen>
-SELECT _oxrsorg.altertableforreplication(tab_id);--tab_id is _slony_schema.sl_table.tab_id
+SELECT _oxrsorg.altertableforreplication(tab_id);
+--tab_id is _slony_schema.sl_table.tab_id
 COMMIT;
 </screen></para>
 
@@ -100,7 +107,8 @@
 it into place.</para>
 
 <para> If a particular DDL script only affects one table, it should be
-unnecessary to lock <emphasis>all</emphasis> application tables.</para></listitem>
+unnecessary to lock <emphasis>all</emphasis> application
+tables.</para></listitem>
 
 <listitem><para> You may need to take a brief application outage in
 order to ensure that your applications are not demanding locks that
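The submission path this section describes is slonik's <command>execute script</command> command. A minimal sketch follows; the cluster name, conninfo, set/node ids, and file path are illustrative assumptions, and per the rules above the SQL file itself must contain no <command>BEGIN</command>/<command>END</command>:

```
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=origin user=slony';

# /tmp/ddl_change.sql must not contain BEGIN or END;
# slonik wraps its contents in a single transaction on each node
execute script (
    set id = 1,
    filename = '/tmp/ddl_change.sql',
    event node = 1
);
```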
Index: supportedplatforms.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/supportedplatforms.sgml,v
retrieving revision 1.2.2.1
retrieving revision 1.2.2.2
diff -Ldoc/adminguide/supportedplatforms.sgml -Ldoc/adminguide/supportedplatforms.sgml -u -w -r1.2.2.1 -r1.2.2.2
--- doc/adminguide/supportedplatforms.sgml
+++ doc/adminguide/supportedplatforms.sgml
@@ -35,7 +35,7 @@
       <entry>Intel - 32 bit</entry>
       <entry>Jun 22, 2005</entry>
       <entry>dvdm at truteq.co.za</entry>
-      <entry>PostgreSQL Version: 7.4.6</entry>
+      <entry>&postgres; Version: 7.4.6</entry>
      </row>
 
      <row>
@@ -44,7 +44,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>dvdm at truteq.co.za</entry>
-      <entry>PostgreSQL Version: 7.4.5</entry>
+      <entry>&postgres; Version: 7.4.5</entry>
      </row>
 
      <row>
@@ -53,7 +53,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>cbbrowne at ca.afilias.info</entry>
-      <entry>PostgreSQL Version: 7.4.8</entry>
+      <entry>&postgres; Version: 7.4.8</entry>
      </row>
 
      <row>
@@ -62,7 +62,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>cbbrowne at ca.afilias.info</entry>
-      <entry>PostgreSQL Version: 8.0.2</entry>
+      <entry>&postgres; Version: 8.0.2</entry>
      </row>
 
      <row>
@@ -71,7 +71,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>cbbrowne at ca.afilias.info</entry>
-      <entry>PostgreSQL Version: 7.3.9</entry>
+      <entry>&postgres; Version: 7.3.9</entry>
      </row>
 
      <row>
@@ -80,7 +80,7 @@
       <entry>PPC</entry>
       <entry>Jun 22, 2005</entry>
       <entry>cbbrowne at ca.afilias.info</entry>
-      <entry>PostgreSQL Version: 7.4.8</entry>
+      <entry>&postgres; Version: 7.4.8</entry>
      </row>
 
      <row>
@@ -89,7 +89,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>cbbrowne at ca.afilias.info</entry>
-      <entry>PostgreSQL Version: 7.4.8</entry>
+      <entry>&postgres; Version: 7.4.8</entry>
      </row>
 
      <row>
@@ -98,7 +98,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>devrim at gunduz.org</entry>
-      <entry>PostgreSQL Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
+      <entry>&postgres; Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
      </row>
 
      <row>
@@ -107,7 +107,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>devrim at gunduz.org</entry>
-      <entry>PostgreSQL Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
+      <entry>&postgres; Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
      </row>
 
      <row>
@@ -116,7 +116,7 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>devrim at gunduz.org</entry>
-      <entry>PostgreSQL Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
+      <entry>&postgres; Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
      </row>
 
      <row>
@@ -125,7 +125,16 @@
       <entry>x86</entry>
       <entry>Jun 22, 2005</entry>
       <entry>devrim at gunduz.org</entry>
-      <entry>PostgreSQL Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
+      <entry>&postgres; Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
+     </row>
+
+     <row>
+      <entry>Red Hat Linux</entry>
+      <entry>9</entry>
+      <entry>x86</entry>
+      <entry>Jul 14, 2005</entry>
+      <entry>devrim at gunduz.org</entry>
+      <entry>&postgres; Version: 8.0.3 , docs fail to build, NAMELEN value must be increased from 44 to 100 to build the docs, or use community RPMs</entry>
      </row>
 
      <row>
@@ -134,7 +143,7 @@
       <entry>AMD64</entry>
       <entry>Jul 01, 2005</entry>
       <entry>stefan at kaltenbrunner.cc </entry>
-      <entry>PostgreSQL Version: 8.0.3</entry>
+      <entry>&postgres; Version: 8.0.3</entry>
      </row>
 
      <row>
@@ -143,7 +152,7 @@
       <entry>Sparc64</entry>
       <entry>Jul 01, 2005</entry>
       <entry>stefan at kaltenbrunner.cc </entry>
-      <entry>PostgreSQL Version: 8.0.3</entry>
+      <entry>&postgres; Version: 8.0.3</entry>
      </row>
      <row>
       <entry>OpenBSD</entry>
@@ -151,10 +160,9 @@
       <entry>x86</entry>
       <entry>Jul 01, 2005</entry>
       <entry>stefan at kaltenbrunner.cc </entry>
-      <entry>PostgreSQL Version: 8.0.3</entry>
+      <entry>&postgres; Version: 8.0.3</entry>
      </row>
 
-
  </tbody>
    </tgroup>
   </table>
Index: failover.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.15
retrieving revision 1.15.2.1
diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.15 -r1.15.2.1
--- doc/adminguide/failover.sgml
+++ doc/adminguide/failover.sgml
@@ -135,6 +135,7 @@
 now use node2 as data provider for the set.  This means that after the
 failover command succeeded, no node in the entire replication setup
 will receive anything from node1 any more.</para>
+
 </listitem>
 
 <listitem>
@@ -143,9 +144,16 @@
 node2.</para>
 </listitem>
 
-<listitem>
-<para> After the failover is complete and node2 accepts write
-operations against the tables, remove all remnants of node1's
+<listitem> <para> Purge out the abandoned node </para>
+
+<para> You will find, after the failover, that there is still a full
+set of references to node 1 in <xref linkend="table.sl-node">, as well
+as in referring tables such as <xref linkend="table.sl-confirm">;
+since data in <xref linkend="table.sl-log-1"> is still present,
+&slony1; cannot immediately purge out the node. </para>
+
+<para> After the failover is complete and node2 accepts
+write operations against the tables, remove all remnants of node1's
 configuration information with the <xref linkend="stmtdropnode">
 command:
 
@@ -153,18 +161,76 @@
 drop node (id = 1, event node = 2);
 </programlisting>
 </para>
+
+<para> Supposing the failure resulted from some catastrophic failure
+of the hardware supporting node 1, there might be no
+<quote>remains</quote> left to look at.  If the failure was not
+<quote>total</quote>, as might be the case if the node had to be
+abandoned due to a network communications failure, you will find that
+node 1 still <quote>imagines</quote> itself to be as it was before the
+failure.  See <xref linkend="rebuildnode1"> for more details on the
+implications.</para>
+
 </listitem>
 </itemizedlist>
+
+</sect2>
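The failover-then-purge sequence described in the steps above amounts to a short slonik script; this is a sketch, with cluster name, conninfo, and node ids as illustrative assumptions:

```
cluster name = mycluster;
node 2 admin conninfo = 'dbname=mydb host=backup user=slony';

# promote node 2; node 1 is abandoned from this point on
failover (id = 1, backup node = 2);

# once node 2 is accepting write operations, purge the abandoned
# node's configuration information
drop node (id = 1, event node = 2);
```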
+
+<sect2><title> Automating <command> FAIL OVER </command> </title>
+
+<para> If you do choose to automate <command>FAIL OVER </command>, it
+is important to do so <emphasis>carefully.</emphasis> You need to have
+good assurance that the failed node is well and truly failed, and you
+need to be able to ensure that the failed node will not accidentally
+return to service, leaving two nodes able to respond in a
+<quote>master</quote> role. </para>
+
+<para> When failover occurs, there needs to be a mechanism to forcibly
+knock the failed node off the network.  This could take place via
+having an SNMP interface that does some combination of the following:
+
+<itemizedlist>
+
+<listitem><para> Turns off power on the failed server. </para> 
+
+<para> If care is not taken, the server may reappear when system
+administrators power it up. </para>
+
+</listitem>
+
+<listitem><para> Modifies firewall rules or other network configuration
+to drop the failed server's IP address from the network. </para>
+
+<para> If the server has multiple network interfaces, and therefore
+multiple IP addresses, this approach allows the
+<quote>application</quote> addresses to be dropped/deactivated while
+leaving <quote>administrative</quote> addresses open so that the server
+would remain accessible to system administrators.  </para> </listitem>
+
+</itemizedlist>
+</para>
 </sect2>
 
-<sect2><title>After Failover, Reconfiguring node1</title>
+<sect2 id="rebuildnode1"><title>After Failover, Reconfiguring
+node 1</title>
 
-<para> After the above failover, the data stored on node1 will rapidly
-become increasingly out of sync with the rest of the nodes, and must
-be treated as corrupt.  Therefore, the only way to get node1 back and
-transfer the origin role back to it is to rebuild it from scratch as a
-subscriber, let it catch up, and then follow the switchover
-procedure.</para>
+<para> What happens to the failed node will depend somewhat on the
+nature of the catastrophe that led to the need to fail over to another
+node.  If the node had to be abandoned because of physical destruction
+of its disk storage, there will likely not be anything of interest
+left.  On the other hand, a node might be abandoned due to the failure
+of a network connection, in which case the former
+<quote>provider</quote> can appear to be functioning perfectly well.
+Nonetheless, once communications are restored, the fact of the
+<command>FAIL OVER</command> makes it mandatory that the failed node
+be abandoned. </para>
+
+<para> After the above failover, the data stored on node 1 will
+rapidly become increasingly out of sync with the rest of the nodes,
+and must be treated as corrupt.  Therefore, the only way to get node 1
+back and transfer the origin role back to it is to rebuild it from
+scratch as a subscriber, let it catch up, and then follow the
+switchover procedure.</para>
 
 <para> A good reason <emphasis>not</emphasis> to do this automatically
 is the fact that important updates (from a
@@ -199,8 +265,8 @@
 network monitoring tool.  You need to have clear methods of
 communicating to applications and users what database hosts are to be
 used.  If those methods are lacking, adding replication to the mix
-will worsen the potential for confusion, and failover will be the
-point at which there is the greatest potential for confusion. </para>
+will worsen the potential for confusion, and failover will be a point
+at which there is enormous potential for confusion. </para>
 </warning>
 
 <para> If the database is very large, it may take many hours to
Index: installation.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v
retrieving revision 1.13.2.1
retrieving revision 1.13.2.2
diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.13.2.1 -r1.13.2.2
--- doc/adminguide/installation.sgml
+++ doc/adminguide/installation.sgml
@@ -2,6 +2,12 @@
 <sect1 id="installation">
 <title>&slony1; Installation</title>
 
+<para>Note for &windows; users: Unless you are planning on hacking the
+&slony1; code, it is highly recommended that you download and install
+a prebuilt binary distribution and jump straight to the configuration
+section below.
+</para>
+
 <para>You should have obtained the &slony1; source from the previous
 step. Unpack it.</para>
 
@@ -56,6 +62,10 @@
 <sect2>
 <title>Example</title>
 
+<para> After determining that the &postgres; instance to be used is
+installed in
+<filename>/opt/dbs/pgsql746-aix-2005-04-01</filename>:</para>
+
 <screen>
 PGMAIN=/opt/dbs/pgsql746-aix-2005-04-01 \
 ./configure \
@@ -81,6 +91,9 @@
 </ulink> Similar patches may need to be constructed for other
 versions; see the FAQ entry on <link linkend="threadsafety"> thread
 safety </link>. </para>
+
+<para> For a full listing of configuration options, run the command
+<command>./configure --help</command>.</para>
 </sect2>
 
 <sect2>
@@ -115,11 +128,12 @@
 gmake install
 </command></para>
 
-<para>This will install files into postgresql install directory as
-specified by the <option>--prefix</option> option used in the
-&postgres; installation.  Make sure you have appropriate permissions
-to write into that area.  Normally you need to do this either as root
-or as the postgres user.  </para>
+<para>This will install files into the postgresql install directory as
+specified by the <command>configure</command>
+<option>--prefix</option> option used in the &postgres; installation.
+Make sure you have appropriate permissions to write into that area.
+Commonly you need to do this either as root or as the postgres user.
+</para>
 </sect2>
 
 <sect2>
@@ -134,8 +148,8 @@
 http://developer.PostgreSQL.org/~devrim/slony . Please read <command>
 CURRENT_MAINTAINER</command> file for the details of the RPMs.
 Please note that the RPMs will look for RPM installation of
-PostgreSQL, so if you install PosgtgreSQL from source, you should
-manually ignore the RPM dependencies related to PostgreSQL.</para>
+&postgres;, so if you install &postgres; from source, you should 
+manually ignore the RPM dependencies related to &postgres;.</para>
 
 <para>Installing &slony1; using these RPMs is as easy as
 installing any RPM.</para>
@@ -153,6 +167,40 @@
 /usr/share/doc/postgresql-slony1-engine.</para>
 
 </sect2>
+
+<sect2>
+<title> Installing the &slony1; service on &windows;</title>
+
+<para> On &windows; systems, instead of running one slon daemon per
+node, a single slon service is installed which can then be controlled
+through the <command>Services</command> control panel applet, or from
+a command prompt using the <command>net</command> command.</para>
+
+<screen>
+C:\Program Files\PostgreSQL\8.0\bin> slon -regservice my_slon
+Service registered.
+Before you can run Slony, you must also register an engine!
+
+WARNING! Service is registered to run as Local System. You are
+encouraged to change this to a low privilege account to increase
+system security. 
+</screen>
+
+<para> Once the service is installed, individual nodes can be setup
+by registering slon configuration files with the service.</para>
+
+<screen>
+C:\Program Files\PostgreSQL\8.0\bin> slon -addengine c:\node1.conf
+Engine added.
+</screen>
+
+<para>Other, self-explanatory commands include <command>slon -unregservice 
+&lt;service name&gt;</command>, <command>slon -listengines 
+&lt;service name&gt;</command> and <command>slon -delengine 
+&lt;service name&gt; &lt;config file&gt;</command>.</para> 
+
+</sect2>
+
 </sect1>
 
 <!-- Keep this comment at the end of the file
Index: slonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.sgml,v
retrieving revision 1.13
retrieving revision 1.13.2.1
diff -Ldoc/adminguide/slonik.sgml -Ldoc/adminguide/slonik.sgml -u -w -r1.13 -r1.13.2.1
--- doc/adminguide/slonik.sgml
+++ doc/adminguide/slonik.sgml
@@ -66,7 +66,8 @@
   these sorts of scripting languages already have perfectly good ways
   of managing variables, doing iteration, and such.</para>
   
-  <para>See also <xref linkend="slonikref">. </para>
+  <para>See also <link linkend="slonikref"> Slonik Command Language
+  reference </link>. </para>
 
  </refsect1>
 
Index: plainpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/plainpaths.sgml,v
retrieving revision 1.8
retrieving revision 1.8.2.1
diff -Ldoc/adminguide/plainpaths.sgml -Ldoc/adminguide/plainpaths.sgml -u -w -r1.8 -r1.8.2.1
--- doc/adminguide/plainpaths.sgml
+++ doc/adminguide/plainpaths.sgml
@@ -1,7 +1,7 @@
 <!-- $Id$ -->
 <sect1 id="plainpaths"><title> &slony1; Path Communications</title>
 
-<para> &slony1; uses &postgres; DSNs in two contexts to establish
+<para> &slony1; uses &postgres; DSNs in three contexts to establish
 access to databases:
 
 <itemizedlist>
@@ -24,6 +24,12 @@
 connections using <link linkend="tunnelling">SSH
 tunnelling</link>.</para></listitem>
 
+<listitem><para> The <xref linkend="slon"> DSN parameter. </para> 
+
+<para> The DSN parameter passed to each <xref linkend="slon">
+indicates what network path should be used to get from the slon
+process to the database that it manages.</para> </listitem>
+
 <listitem><para> <xref linkend="stmtstorepath"> - controlling how
 <xref linkend="slon"> daemons communicate with remote nodes.  These
 paths are stored in <xref linkend="table.sl-path">.</para>
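For reference, a <command>store path</command> declaration looks like this in slonik; the cluster name, node ids, and conninfo are assumptions:

```
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=origin user=slony';

# tell the slon managing node 2 how to reach the database on node 1
store path (server = 1, client = 2,
            conninfo = 'dbname=mydb host=origin user=slony');
```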
@@ -79,8 +85,8 @@
 would doubtless prefer for them to be useful, and that can certainly
 be the case.  If the primary site is being used for
 <quote>transactional activities,</quote> the replicas at the secondary
-site may be used for running time-oriented reports that are allowed to
-be a little bit behind.</para></listitem>
+site may be used for running time-oriented reports that do not require
+up-to-the second data.</para></listitem>
 
 <listitem><para> The symmetry of the configuration means that if you
 had <emphasis>two</emphasis> transactional applications needing
Index: logshipping.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/logshipping.sgml,v
retrieving revision 1.9.2.1
retrieving revision 1.9.2.2
diff -Ldoc/adminguide/logshipping.sgml -Ldoc/adminguide/logshipping.sgml -u -w -r1.9.2.1 -r1.9.2.2
--- doc/adminguide/logshipping.sgml
+++ doc/adminguide/logshipping.sgml
@@ -1,6 +1,7 @@
 <!-- $Id$ -->
 <sect1 id="logshipping">
 <title>Log Shipping - &slony1; with Files</title>
+<indexterm><primary>log shipping</primary></indexterm>
 
 <para> One of the new features for 1.1 is the ability to serialize the
 updates to go out into log files that can be kept in a spool
@@ -118,22 +119,10 @@
 entirety of the traffic going to a subscriber.  You cannot separate
 things out if there are multiple replication sets.  </para></answer>
 
-<answer><para> The <quote>log shipping node</quote> presently tracks
-only <command>SYNC</command> events.  This should be sufficient to cope with
-<emphasis>some</emphasis> changes in cluster configuration, but not
-others.  </para>
-
-<para> Log shipping does <emphasis>not</emphasis> process certain
-additional events, with the implication that the introduction of any
-of the following events can invalidate the relationship between the
-<command>SYNC</command>s and the dump created using
-<application>slony1_dump.sh</application> so that you'll likely need
-to rerun <application>slony1_dump.sh</application>:
-
-<itemizedlist>
-<listitem><para><command> SUBSCRIBE_SET </command></para></listitem> 
-
-</itemizedlist></para>
+<answer><para> The <quote>log shipping node</quote> presently only
+fully tracks <command>SYNC</command> events.  This should be
+sufficient to cope with <emphasis>some</emphasis> changes in cluster
+configuration, but not others.  </para>
 
 <para> A number of event types <emphasis> are </emphasis> handled in
 such a way that log shipping copes with them:
@@ -159,6 +148,8 @@
 <command>MERGE_SET</command>, will be handled
 <quote>appropriately</quote>.</para></listitem>
 
+<listitem><para><command> SUBSCRIBE_SET </command></para></listitem> 
+
 <listitem><para> The various events involved in node configuration are
 irrelevant to log shipping:
 
@@ -205,13 +196,15 @@
 <itemizedlist>
 
 <listitem><para> You <emphasis>don't</emphasis> want to blindly apply
-<command>SYNC</command> files because any given <command>SYNC</command> file may <emphasis>not</emphasis> be
-the right one.  If it's wrong, then the result will be that the call
-to <function> setsyncTracking_offline() </function> will fail, and
-your <application> psql</application> session will <command> ABORT
-</command>, and then run through the remainder of that <command>SYNC</command> file
-looking for a <command>COMMIT</command> or <command>ROLLBACK</command>
-so that it can try to move on to the next transaction.</para>
+<command>SYNC</command> files because any given
+<command>SYNC</command> file may <emphasis>not</emphasis> be the right
+one.  If it's wrong, then the result will be that the call to
+<function> setsyncTracking_offline() </function> will fail, and your
+<application> psql</application> session will <command> ABORT
+</command>, and then run through the remainder of that
+<command>SYNC</command> file looking for a <command>COMMIT</command>
+or <command>ROLLBACK</command> so that it can try to move on to the
+next transaction.</para>
 
 <para> But we <emphasis> know </emphasis> that the entire remainder of
 the file will fail!  It is futile to go through the parsing effort of
@@ -222,7 +215,8 @@
 <itemizedlist>
 
 <listitem><para> Read the first few lines of the file, up to and
-including the <function> setsyncTracking_offline() </function> call.</para></listitem>  
+including the <function> setsyncTracking_offline() </function>
+call.</para></listitem>
 
 <listitem><para> Try to apply it that far.</para></listitem>
 
Index: slony.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v
retrieving revision 1.20.2.1
retrieving revision 1.20.2.2
diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.20.2.1 -r1.20.2.2
--- doc/adminguide/slony.sgml
+++ doc/adminguide/slony.sgml
@@ -9,6 +9,41 @@
   <!ENTITY slony1 "<PRODUCTNAME>Slony-I</PRODUCTNAME>">
   <!ENTITY postgres "<PRODUCTNAME>PostgreSQL</PRODUCTNAME>">
   <!ENTITY windows "<trademark>Windows</trademark>">
+  <!ENTITY logship "<link linkend=logshipping>log shipping</link>">
+  <!ENTITY rlocking "<link linkend=locking> locking </link>">
+  <!ENTITY rddlchanges "<xref linkend=ddlchanges>"> 
+  <!ENTITY fundroplisten "<xref linkend=function.droplisten-integer-integer-integer>">
+  <!ENTITY fundropset "<xref linkend= function.dropset-integer>"> 
+  <!ENTITY funmergeset "<xref linkend= function.mergeset-integer-integer>"> 
+  <!ENTITY funsetdroptable "<xref linkend= function.setdroptable-integer>">
+  <!ENTITY funstorelisten "<xref linkend= function.storelisten-integer-integer-integer>">
+  <!ENTITY funstorepath "<xref linkend=function.storepath-integer-integer-text-integer>">
+  <!ENTITY funstoreset "<xref linkend=function.storeset-integer-text>">
+  <!ENTITY funtableaddkey "<xref linkend= function.tableaddkey-text>">
+  <!ENTITY funsetaddtable "<xref linkend= function.setaddtable-integer-integer-text-name-text>">
+  <!ENTITY funsetaddsequence "<xref linkend= function.setaddsequence-integer-integer-text-text>">
+  <!ENTITY funsetdropsequence "<xref linkend= function.setdropsequence-integer>">
+  <!ENTITY funsetmovetable "<xref linkend= function.setmovetable-integer-integer>">
+  <!ENTITY fundroptrigger "<xref linkend=function.droptrigger-integer-name>">
+  <!ENTITY funddlscript "<xref linkend=function.ddlscript-integer-text-integer>">
+  <!ENTITY fundropnode "<xref linkend=function.dropnode-integer>">
+  <!ENTITY funenablenode "<xref linkend=function.enablenode-integer>">
+  <!ENTITY funfailednode "<xref linkend=function.failednode-integer-integer>">
+  <!ENTITY funinitializelocalnode "<xref linkend=function.initializelocalnode-integer-text>">
+  <!ENTITY funlockset "<xref linkend=function.lockset-integer>">
+  <!ENTITY funmoveset "<xref linkend=function.moveset-integer-integer>">
+  <!ENTITY funsetmovesequence "<xref linkend=function.setmovesequence-integer-integer>">
+  <!ENTITY funstoretrigger "<xref linkend=function.storetrigger-integer-name>">
+  <!ENTITY funsubscribeset "<xref linkend=function.subscribeset-integer-integer-integer-boolean>">
+  <!ENTITY fununinstallnode "<xref linkend=function.uninstallnode>">
+  <!ENTITY fununlockset "<xref linkend=function.unlockset-integer>">
+  <!ENTITY fununsubscribeset "<xref linkend=function.unsubscribeset-integer-integer>">
+  <!ENTITY rmissingoids "<link linkend=missingoids>error messages indicating missing OIDs</link>">
+  <!ENTITY slnode "<xref linkend=table.sl-node>">
+  <!ENTITY slconfirm "<xref linkend=table.sl-confirm>">
+  <!ENTITY rplainpaths "<xref linkend=plainpaths>">
+  <!ENTITY rlistenpaths "<xref linkend=listenpaths>">
+
 ]>
 
 <book id="slony">
Index: locking.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/locking.sgml,v
retrieving revision 1.2
retrieving revision 1.2.2.1
diff -Ldoc/adminguide/locking.sgml -Ldoc/adminguide/locking.sgml -u -w -r1.2 -r1.2.2.1
--- doc/adminguide/locking.sgml
+++ doc/adminguide/locking.sgml
@@ -27,21 +27,22 @@
 
 <para> A momentary table lock must be acquired on the
 <quote>origin</quote> node in order to add the trigger that collects
-updates for that table.</para>
+updates for that table.  It only needs to be acquired long enough to
+establish the new trigger.</para>
 </listitem>
 
 <listitem><para><link linkend="stmtmoveset"> <command> move
 set</command> </link></para>
 
 <para> When a set origin is shifted from one node to another, locks
-must be acquired on the tables on both the old origin and the new
-origin in order to change the triggers on the tables.
+must be acquired on each of the tables on both the old origin and the
+new origin in order to change the triggers on the tables.
 </para></listitem>
 
 <listitem><para><link linkend="stmtlockset"> <command> lock set
 </command> </link> </para>
 
-<para> This operation expressly requests locks on the tables in a
+<para> This operation expressly requests locks on each of the tables in a
 given replication set on the origin node.</para>
 </listitem>
 
@@ -61,7 +62,7 @@
 <para> In a sense, this is the least provocative scenario, since,
 before the replication set has been populated, it is pretty reasonable
 to say that the node is <quote>unusable</quote> and that &slony1;
-could reasonably expect exclusive access to the node. </para>
+could reasonably demand exclusive access to the node. </para>
 </listitem>
 
 </itemizedlist>
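The <command>lock set</command> and <command>move set</command> operations listed above are normally combined into one controlled-switchover script, roughly as follows; this is a sketch, with cluster name, conninfo, and set/node ids as assumptions:

```
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=origin user=slony';
node 2 admin conninfo = 'dbname=mydb host=backup user=slony';

# take the per-table locks on the origin
lock set (id = 1, origin = 1);
wait for event (origin = 1, confirmed = 2);

# shift the origin to node 2; completing this releases the locks
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2);
```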
@@ -116,7 +117,8 @@
 the <link linkend="slonyuser"> <command>slony</command> user </link>
 will have access to the database. </para> </listitem>
 
-<listitem><para> Issue a <command>kill -SIGHUP</command> to the &postgres;  postmaster.</para> 
+<listitem><para> Issue a <command>kill -SIGHUP</command> to the
+&postgres; postmaster.</para>
 
 <para> This will not kill off existing possibly-long-running queries,
 but will prevent new ones from coming in.  There is an application
@@ -135,7 +137,7 @@
 attach to the node. </para> 
 
 <para> At that point, it will be safe to proceed with the &slony1;
-operation.</para></listitem>
+operation; there will be no competing processes.</para></listitem>
 
 <listitem><para> Reset <filename>pg_hba.conf</filename> to allow other
 users in, and <command>kill -SIGHUP</command> the postmaster to make
@@ -145,11 +147,10 @@
 </para>
 </listitem>
 
-<listitem><para> The section on <link linkend="ddlchanges"> DDL
-Changes </link> suggests some additional techniques that may be
-useful, such as moving tables between replication sets in such a way
-that you minimize the set of tables that need to be
-locked. </para></listitem>
+<listitem><para> The section &rddlchanges; suggests some additional
+techniques that may be useful, such as moving tables between
+replication sets in such a way that you minimize the set of tables
+that need to be locked. </para></listitem>
 
 </itemizedlist>
 

