CVS User Account cvsuser
Fri Feb 18 22:43:41 PST 2005
Log Message:
-----------
Eliminated most <link> tags in favor of (much simpler!) <xref> tags

Added a number of links into the schema docs

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.sgml (r1.10 -> r1.11)
        adminscripts.sgml (r1.17 -> r1.18)
        concepts.sgml (r1.10 -> r1.11)
        ddlchanges.sgml (r1.12 -> r1.13)
        defineset.sgml (r1.11 -> r1.12)
        dropthings.sgml (r1.11 -> r1.12)
        failover.sgml (r1.11 -> r1.12)
        faq.sgml (r1.21 -> r1.22)
        firstdb.sgml (r1.10 -> r1.11)
        intro.sgml (r1.10 -> r1.11)
        listenpaths.sgml (r1.12 -> r1.13)
        maintenance.sgml (r1.11 -> r1.12)
        monitoring.sgml (r1.13 -> r1.14)
        plainpaths.sgml (r1.3 -> r1.4)
        prerequisites.sgml (r1.12 -> r1.13)
        reshape.sgml (r1.12 -> r1.13)
        schemadoc.xml (r1.3 -> r1.4)
        slon.sgml (r1.11 -> r1.12)
        slonik.sgml (r1.11 -> r1.12)
        slonik_ref.sgml (r1.13 -> r1.14)
        startslons.sgml (r1.9 -> r1.10)
        subscribenodes.sgml (r1.10 -> r1.11)
        usingslonik.sgml (r1.4 -> r1.5)
        versionupgrade.sgml (r1.3 -> r1.4)

-------------- next part --------------
Index: versionupgrade.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/versionupgrade.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/versionupgrade.sgml -Ldoc/adminguide/versionupgrade.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/versionupgrade.sgml
+++ doc/adminguide/versionupgrade.sgml
@@ -40,10 +40,11 @@
 <listitem><para> Stop all applications that might modify the data</para></listitem>
 
 <listitem><para> Lock the set against client application updates using
-<link linkend="stmtlockset"><command>LOCK SET</command></link></para></listitem>
+<xref linkend="stmtlockset"></para></listitem>
 
-<listitem><para> Submit the Slonik command <link linkend="stmtmoveset">eset"><command>MOVE SET</command></link> to shift the
-origin from the old database to the new one</para></listitem>
+<listitem><para> Submit the Slonik command <xref
+linkend="stmtmoveset"> to shift the origin from the old database to
+the new one</para></listitem>
 
 <listitem><para> Point the applications at the new database</para></listitem>
 </itemizedlist></para>
@@ -57,9 +58,9 @@
 <para> Note that after the origin has been shifted, updates now flow
 into the <emphasis>old</emphasis> database.  If you discover that due
 to some unforeseen, untested condition, your application is somehow
-unhappy connecting to the new database, you could easily use <link
-   linkend="stmtmoveset"><command>MOVE SET</command></link> again to
-shift the origin back to the old database.</para>
+unhappy connecting to the new database, you could easily use <xref
+linkend="stmtmoveset"> again to shift the origin back to the old
+database.</para>
 
 <para> If you consider it particularly vital to be able to shift back
 to the old database in its state at the time of the changeover, so as
@@ -87,13 +88,12 @@
 that has been feeding the subscriber running the old version of
 &postgres;</para>
 
-<para> You may want to use <link linkend="stmtuninstallnode">
-<command>UNINSTALL NODE</command></link> to decommission this node,
-making it into a standalone database, or merely kill the
-<application>slon</application>, depending on how permanent you want
-this all to be.</para></listitem>
+<para> You may want to use <xref linkend="stmtuninstallnode"> to
+decommission this node, making it into a standalone database, or
+merely kill the <application>slon</application>, depending on how
+permanent you want this all to be.</para></listitem>
 
-<listitem><para> Then use <command>MOVE SET</command> to shift the
+<listitem><para> Then use <xref linkend="stmtmoveset"> to shift the
 origin, as before.</para></listitem>
 
 </itemizedlist></para>
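
The switchover recipe described above boils down to a short slonik
fragment along these lines (a minimal sketch; the cluster name, node
ids, and conninfo strings are placeholders, not taken from the docs):

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=mydb host=oldserver user=slony';
    node 2 admin conninfo = 'dbname=mydb host=newserver user=slony';

    # Applications are already stopped; guard the set against
    # further client updates on the old origin.
    lock set (id = 1, origin = 1);

    # Shift the origin of set 1 from the old database to the new one.
    move set (id = 1, old origin = 1, new origin = 2);

If the new database proves problematic, running MOVE SET again with
the old and new origins swapped shifts the origin back, as the text
above notes.
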
Index: slon.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slon.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/slon.sgml -Ldoc/adminguide/slon.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/slon.sgml
+++ doc/adminguide/slon.sgml
@@ -1,18 +1,18 @@
 <!-- $Id$ -->
-<refentry id="app-slon">
+<refentry id="slon">
  <refmeta>
   <refentrytitle id="app-slon-title"><application>slon</application></refentrytitle>
   <manvolnum>1</manvolnum>
   <refmiscinfo>Application</refmiscinfo>
  </refmeta>
  <refnamediv>
-  <refname><application id="slon">slon</application></refname>
+  <refname><application>slon</application></refname>
   <refpurpose>
-   <productname>Slony-I</productname> daemon
+   &slony1; daemon
   </refpurpose>
  </refnamediv>
  
- <indexterm zone="app-slon">
+ <indexterm zone="slon">
   <primary>slon</primary>
  </indexterm>
  
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -29,10 +29,9 @@
 <itemizedlist>
 
 <listitem><para> If the table has a formally identified primary key,
-<command><link linkend="stmtsetaddtable">SET ADD
-TABLE</link></command> can be used without any need to reference the
-primary key.  &slony1; will pick up that
-there is a primary key, and use it.</para></listitem>
+<xref linkend="stmtsetaddtable"> can be used without any need to
+reference the primary key.  &slony1; will pick up that there is a
+primary key, and use it.</para></listitem>
 
 <listitem><para> If the table hasn't got a primary key, but has some
 <emphasis>candidate</emphasis> primary key, that is, some index on a
@@ -52,10 +51,9 @@
 
 <listitem><para> If the table hasn't even got a candidate primary key,
 you can ask &slony1; to provide one.  This is done by first using
-<command><link linkend="stmttableaddkey"> TABLE ADD KEY </link>
-</command> to add a column populated using a &slony1; sequence, and
-then having the <command> <link linkend="stmtsetaddtable"> SET ADD
-TABLE</link></command> include the directive
+<xref linkend="stmttableaddkey"> to add a column populated using a
+&slony1; sequence, and then having the <xref
+linkend="stmtsetaddtable"> include the directive
 <option>key=serial</option>, to indicate that &slony1;'s own column
 should be used.</para></listitem>
 
@@ -90,32 +88,31 @@
 
 <listitem><para> Replicating a large set leads to a <link
        linkend="longtxnsareevil"> long running transaction </link> on the
-provider node.  The <link linkend="faq"> FAQ </link> outlines a number
-of problems that result from long running transactions that will
-injure system performance.</para>
+provider node.  <xref linkend="faq"> outlines a number of problems
+that result from long running transactions that will injure system
+performance.</para>
 
 <para> If you can split a large set into several pieces, that will
 shorten the length of each of the transactions, lessening the degree
 of <quote>injury</quote> to performance.</para></listitem>
 
-<listitem><para> Any time you invoke <link linkend="stmtddlscript">
-<command> EXECUTE SCRIPT </command></link>, this requests a lock on
-<emphasis> every single table in the replication set. </emphasis></para>
+<listitem><para> Any time you invoke <xref linkend="stmtddlscript">,
+this requests a lock on <emphasis> every single table in the
+replication set. </emphasis></para>
 
 <para> There have been reports <quote>in the field</quote> of this
-leading to deadlocks such that the <link linkend="stmtddlscript">
-<command> EXECUTE SCRIPT </command></link> request had to be submitted
-many times in order for it to actually complete successfully.</para>
+leading to deadlocks such that the <xref linkend="stmtddlscript">
+request had to be submitted many times in order for it to actually
+complete successfully.</para>
 
 <para> The more tables you have in a set, the more tables need to be
 locked, and the greater the chances of deadlocks. </para>
 
 <para> By the same token, if a particular DDL script only needs to
-affect a couple of tables, you might use <link
-       linkend="stmtsetmovetable"> <command>SET MOVE TABLE</command></link>
-to move them temporarily to a new replication set.  By diminishing the
-number of locks needed, this should ease the ability to get the DDL
-change into place.</para>
+affect a couple of tables, you might use <xref
+linkend="stmtsetmovetable"> to move them temporarily to a new
+replication set.  By diminishing the number of locks needed, this
+should ease the ability to get the DDL change into place.</para>
 </listitem>
 
 </itemizedlist></para>
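
The table-key cases discussed earlier in this file map onto slonik
statements roughly like the following (a sketch only; the set, node,
and table ids and the table names are invented for illustration):

    # Table with a formally identified primary key: no key clause needed.
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.accounts',
                   comment = 'accounts table');

    # Table with only a candidate primary key: name the unique index
    # (unqualified, i.e. without the namespace).
    set add table (set id = 1, origin = 1, id = 2,
                   fully qualified name = 'public.branches',
                   key = 'branches_code_idx',
                   comment = 'branches table');

    # Table with no usable key: have Slony-I add one, then use key=serial.
    table add key (node id = 1, fully qualified name = 'public.history');
    set add table (set id = 1, origin = 1, id = 3,
                   fully qualified name = 'public.history',
                   key = serial, comment = 'history table');
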
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.17
retrieving revision 1.18
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.17 -r1.18
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -9,16 +9,16 @@
 nodes.</para>
 
 <para>Most of them generate Slonik scripts that are then to be passed
-on to the <link linkend="slonik"><application>slonik</application></link> utility
-to be submitted to all of the &slony1; nodes in a
-particular cluster.  At one time, this embedded running <link
-   linkend="slonik">slonik</link> on the slonik scripts.
+on to the <xref linkend="slonik"> utility to be submitted to all of
+the &slony1; nodes in a particular cluster.  At one time, this
+embedded running <xref linkend="slonik"> on the slonik scripts.
 Unfortunately, this turned out to be a pretty large calibre
-<quote>foot gun</quote>, as minor typos on the command line led, on a couple
-of occasions, to pretty calamitous actions, so the behavior has been
-changed so that the scripts simply submit output to standard output.
-An administrator should review the script <emphasis>before</emphasis> submitting
-it to <link linkend="slonik">slonik</link>.</para>
+<quote>foot gun</quote>, as minor typos on the command line led, on a
+couple of occasions, to pretty calamitous actions, so the behavior has
+been changed so that the scripts simply submit output to standard
+output.  An administrator should review the script
+<emphasis>before</emphasis> submitting it to <xref
+linkend="slonik">.</para>
 
 <sect2><title>Node/Cluster Configuration - cluster.nodes</title>
 
@@ -67,10 +67,10 @@
 objects will be contained in a particular replication set.</para>
 
 <para>Unlike <envar>SLONYNODES</envar>, which is essential for
-<emphasis>all</emphasis> of the <link linkend="slonik">slonik</link>-generating scripts, this only needs to
-be set when running <filename>create_set.pl</filename>, as that is the
-only script used to control what tables will be in a particular
-replication set.</para>
+<emphasis>all</emphasis> of the <xref linkend="slonik">-generating
+scripts, this only needs to be set when running
+<filename>create_set.pl</filename>, as that is the only script used to
+control what tables will be in a particular replication set.</para>
 
 <para>What variables are set up.</para>
 <itemizedlist>
@@ -222,17 +222,12 @@
 <sect2 id="regenlisten"><title>regenerate-listens.pl</title>
 
 <para>This script connects to a &slony1; node, and queries various
-tables (<link linkend="table.sl-set"> <envar>sl_set</envar></link>,
-<link linkend="table.sl-node"> <envar>sl_node</envar></link>, <link
-linkend="table.sl-subscribe"> <envar>sl_subscribe</envar></link>,
-<link linkend="table.sl-path"> <envar>sl_path</envar></link>) to
-compute what <command><link linkend="stmtstorelisten"> STORE
-LISTEN</link></command> requests should be submitted to the
+tables (sl_set, sl_node, sl_subscribe, sl_path) to compute what <xref
+linkend="stmtstorelisten"> requests should be submitted to the
 cluster.</para>
 
-<para> See the documentation on <link linkend="autolisten">Automated
-Listen Path Generation</link> for more details on how this
-works.</para>
+<para> See the documentation in <xref linkend="autolisten"> for more
+details on how this works.</para>
 </sect2>
 
 </sect1>
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.13
retrieving revision 1.14
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.13 -r1.14
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -113,17 +113,18 @@
    <refsect1>
     <title>Description</title>
     <para>
-     Must be the very first command in every
-     <application>slonik</application> script. Defines the namespace
-     in which all <productname>Slony-I</productname> specific
-     functions, procedures, tables and sequences are defined. The
-     namespace name is built by prefixing the given string literal
-     with an underscore. This namespace will be identical in all
-     databases that participate in the same replication group.
+     Must be the very first statement in every
+     <application>slonik</application> script. It defines the
+     namespace in which all <productname>Slony-I</productname>
+     specific functions, procedures, tables and sequences are
+     defined. The namespace name is built by prefixing the given
+     string literal with an underscore. This namespace will be
+     identical in all databases that participate in the same
+     replication group.
     </para>
     
     <para>
-     No user objects are supposed to live in this namespace and the
+     No user objects are supposed to live in this namespace, and the
      namespace is not allowed to exist prior to adding a database to
      the replication system.  Thus, if you add a new node using
      <command> pg_dump -s </command> on a database that is already in
@@ -170,12 +171,12 @@
 
     <para>
      The <application>slonik</application> utility will not try to
-     connect to the databases unless some subsequent command requires
-     the connection.
+     connect to a given database unless some subsequent command
+     requires the connection.
     </para>
 
-    <para>
-     Note: As mentioned in the original documents,
+   <note> <para>
+     As mentioned in the original documents,
      <productname>Slony-I</productname> is designed as an enterprise
      replication system for data centers. It has been assumed
      throughout the entire development that the database servers and
@@ -184,14 +185,22 @@
      schemes like <quote>trust</quote>.  Alternatively, libpq can read
      passwords from <filename> .pgpass </filename>.
     </para>
+   </note>
+   <note>
     <para>
-     Note: If you need to change the DSN information for a node, as
-     would happen if the IP address for a host were to change, you may
-     submit the new information using this command, and that
-     configuration will be propagated.  Existing <application>> slon
-     </application> processes will need to be restarted in order to
-     become aware of the configuration change.
+    If you need to change the DSN information for a node, as would
+    happen if the IP address for a host were to change, you must
+    submit the new information using the <xref
+    linkend="stmtstorepath"> command, and that configuration will be
+    propagated.  Existing <application> slon </application> processes
+    may need to be restarted in order to become aware of the
+    configuration change.
     </para>
+   </note>
+
+   <para> For more details on the distinction between this and <xref
+   linkend="stmtstorepath">, see <xref linkend="plainpaths">.</para>
+
    </Refsect1>
    <Refsect1><Title>Example</Title>
     <Programlisting>
@@ -281,7 +290,9 @@
      new <productname>Slony-I</productname> replication cluster.  The
      initialization process consists of creating the cluster namespace,
      loading all the base tables, functions, procedures and
-     initializing the node.
+    initializing the node, using <xref
+    linkend="function.initializelocalnode-integer-text"> and <xref
+    linkend="function.enablenode-integer">.
      
      <variablelist>
       <varlistentry><term><literal>ID</literal></term>
@@ -313,6 +324,13 @@
    COMMENT = 'Node 1'
 );
     </programlisting>
+
+   <note> <para> This command functions very similarly to <xref
+   linkend="stmtstorenode">, the difference being that <command>INIT
+   CLUSTER </command> does not need to draw configuration from other
+   existing nodes.
+
+   </para> </note>
    </refsect1>
   </refentry>
 
@@ -646,13 +664,13 @@
 
     <para> Every node in the system must listen for events from every
      other node in the system. As a general rule of thumb, a subscriber
-     (see <link linkend="stmtsubscribeset">SUBSCRIBE SET</link>) should
-     listen for events of the set's origin on the same provider, where
-     it receives the data from. In turn, the origin of the data set
-     should listen for events from the origin in the opposite
-     direction. A node can listen for events from one and the same
-     origin on different providers at the same time. However, to
-     process <command>SYNC</command> events from that origin, all data
+    (see <xref linkend="stmtsubscribeset">) should listen for events
+    of the set's origin on the same provider, where it receives the
+    data from. In turn, the origin of the data set should listen for
+    events from the origin in the opposite direction. A node can
+    listen for events from one and the same origin on different
+    providers at the same time. However, to process
+    <command>SYNC</command> events from that origin, all data
      providers must have the same or higher sync status, so this will
      not result in any faster replication behaviour.
     </para>
@@ -672,7 +690,7 @@
      </varlistentry>
     </variablelist>
 
-    <para> For more details, see the section on <link linkend="listenpaths"> Slony-I Listen Paths. </link></para>
+    <para> For more details, see <xref linkend="listenpaths">.</para>
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
@@ -758,8 +776,7 @@
     <variablelist>
      <varlistentry><term><literal> NODE ID = ival </literal></term>
       <listitem><para> Node ID of the set origin where the table will be
-	added as a set member. (See <link linkend="stmtsetaddtable">
-	 <command>SET ADD TABLE</command></link>.)</para></listitem>
+	added as a set member. (See <xref linkend="stmtsetaddtable">.)</para></listitem>
      </varlistentry>
      <varlistentry><term><literal> FULLY QUALIFIED NAME  = 'string' </literal></term>
       <listitem><para> The full name of the table consisting of the schema
@@ -950,8 +967,8 @@
     
     <para> Add an existing user table to a replication set. The set
      cannot currently be subscribed by any other node - that
-     functionality is supported by the <command><link
-       linkend="stmtmergeset"> MERGE SET</link> </command> command.
+    functionality is supported by the <xref linkend="stmtmergeset">
+    command.
      
      <variablelist>
       <varlistentry><term><literal> SET ID = ival </literal></term>
@@ -966,8 +983,8 @@
        <listitem><para> Unique ID of the table. These ID's are not
 	 only used to uniquely identify the individual table within the
 	 replication system. The numeric value of this ID also
-	 determines the order in which the tables are locked in a <link
-	  linkend="stmtlockset">LOCK SET</link> command for example. So
+	 determines the order in which the tables are locked in a <xref
+	  linkend="stmtlockset"> command for example. So
 	 these numbers should represent any applicable table hierarchy
 	 to make sure the <application>slonik</application> command
 	 scripts do not deadlock at any critical
@@ -975,15 +992,15 @@
       </varlistentry>
       <varlistentry><term><literal> FULLY QUALIFIED NAME = 'string' </literal></term>
        <listitem><para> The full table name as described in
-	 <link linkend="stmttableaddkey">TABLE ADD KEY</link>.</para></listitem>
+	 <xref linkend="stmttableaddkey">.</para></listitem>
       </varlistentry>
       <varlistentry><term><literal> KEY = { 'string' | SERIAL }
 	</literal></term> <listitem><para>
 	 <emphasis>(Optional)</emphasis> The index name that covers the
 	 unique and not null column set to be used as the row identifier
 	 for replication purposes. Or the keyword SERIAL to use the
-	 special column added with a previous <link
-	  linkend="stmttableaddkey">TABLE ADD KEY</link> command. Default
+	 special column added with a previous <xref
+	  linkend="stmttableaddkey"> command. Default
 	 is to use the table's primary key.  The index name is <emphasis>
 	  not </emphasis> fully qualified; you must omit the
 	 namespace.</para></listitem>
@@ -1029,8 +1046,8 @@
     <para>
      Add an existing user sequence to a replication set. The set
      cannot currently be subscribed by any other node - that
-     functionality is supported by the <command><link
-       linkend="stmtmergeset"> MERGE SET</link> </command> command.
+     functionality is supported by the <xref linkend="stmtmergeset">
+     command.
      
      <variablelist>
       <varlistentry><term><literal> SET ID = ival </literal></term>
@@ -1053,7 +1070,7 @@
       </varlistentry>
       <varlistentry><term><literal> FULLY QUALIFIED NAME = 'string' </literal></term>
        <listitem><para> The full sequence name as described in
-	 <link linkend="stmttableaddkey">TABLE ADD KEY</link>.</para></listitem>
+	 <xref linkend="stmttableaddkey">.</para></listitem>
       </varlistentry>
       <varlistentry><term><literal> COMMENT = 'string' </literal></term>
        <listitem><para> A descriptive text added to the sequence entry.  </para></listitem>
@@ -1096,9 +1113,9 @@
      Drop a table from a replication set.
     </para>
     <para>
-     Note that this action will <emphasis> not </emphasis> drop a candidate
-     primary key created using <link linkend="stmttableaddkey"> <command> TABLE ADD KEY
-      </command></link>.
+     Note that this action will <emphasis> not </emphasis> drop a
+     candidate primary key created using <xref
+     linkend="stmttableaddkey">.
      
      <variablelist>
       <varlistentry><term><literal> ORIGIN = ival </literal></term>
@@ -1551,8 +1568,7 @@
     <title>Description</title>
     
     <para> Guards a replication set against client application updates
-     in preparation for a <link linkend="stmtmoveset">MOVE SET</link>
-     command.
+    in preparation for a <xref linkend="stmtmoveset"> command.
     </para>
 
     <para> This command must be the first in a possible statement
@@ -1728,16 +1744,15 @@
      After successful failover, all former direct subscribers of the
      failed node become direct subscribers of the backup node. The
      failed node is abandoned, and can and should be removed from the
-     configuration with <command><link linkend="stmtdropnode"> DROP
-     NODE</link> </command>.
+     configuration with <xref linkend="stmtdropnode">.
     </para>
     
     <warning><para> This command will abandon the status of the failed
       node.  There is no possibility to let the failed node join the
       cluster again without rebuilding it from scratch as a slave.  If
-      at all possible, you would likely prefer to use <command> <link
-	linkend="stmtmoveset"> MOVE SET </link> </command> instead, as
-      that does <emphasis>not</emphasis> abandon the failed node.
+    at all possible, you would likely prefer to use <xref
+    linkend="stmtmoveset"> instead, as that does
+    <emphasis>not</emphasis> abandon the failed node.
     </para></warning>
     
     <variablelist>
@@ -1823,8 +1838,7 @@
      </varlistentry>
     </variablelist>
     
-    <para> See also the warnings in <link linkend="ddlchanges">
-    Database Schema Changes (DDL)</link>.</para>
+    <para> See also the warnings in <xref linkend="ddlchanges">.</para>
 
     <para> Note that at the start of this event, all tables in the
     specified set are unlocked via the function
@@ -1875,9 +1889,8 @@
      by earlier calls are currently not checked). In certain situations
      it is necessary that events generated on one node (such as
      <command>CREATE SET</command>) are processed on another node
-     before issuing more commands (for instance, <link
-      linkend="stmtsubscribeset"><command>SUBSCRIBE
-       SET</command></link>).  <command>WAIT FOR EVENT</command> may be
+     before issuing more commands (for instance, <xref
+      linkend="stmtsubscribeset">).  <command>WAIT FOR EVENT</command> may be
      used to cause the <application>slonik</application> script to wait
      until the subscriber node is ready for the next action.
     </para>
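
To make the distinction between admin conninfo and STORE PATH a bit
more tangible, a minimal slonik sketch (ids, database and host names
invented for the example) might read:

    cluster name = testcluster;

    # Admin conninfo: how slonik itself reaches each node.
    node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
    node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';

    # STORE PATH: how the slon serving the client node reaches the
    # server node; this is what gets updated (and propagated) when a
    # host's IP address or DSN changes.
    store path (server = 1, client = 2,
                conninfo = 'dbname=mydb host=server1 user=slony');
    store path (server = 2, client = 1,
                conninfo = 'dbname=mydb host=server2 user=slony');
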
Index: schemadoc.xml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/schemadoc.xml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/schemadoc.xml -Ldoc/adminguide/schemadoc.xml -u -w -r1.3 -r1.4
--- doc/adminguide/schemadoc.xml
+++ doc/adminguide/schemadoc.xml
@@ -1,10 +1,5 @@
 <!-- $Header$ -->
 
-
-
-
-
-
   <chapter id="schema"
            xreflabel="schemadoc">
     <title>Schema schemadoc</title>
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.10
retrieving revision 1.11
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.10 -r1.11
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -38,9 +38,9 @@
  NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';
 </programlisting>
 
-<para>The <link linkend="admconninfo"><command>CONNINFO</command></link>
-information indicates a string argument that will ultimately be passed
-to the <function>PQconnectdb()</function> libpq function.</para>
+<para>The <xref linkend="admconninfo"> information indicates a string
+argument that will ultimately be passed to the
+<function>PQconnectdb()</function> libpq function.</para>
 
 <para>Thus, a &slony1; cluster consists of:</para>
 <itemizedlist>
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -142,9 +142,9 @@
 
 <sect2><title> Log Files</title>
 
-<para><link linkend="slon"> <application>slon</application></link> daemons
-generate some more-or-less verbose log files, depending on what
-debugging level is turned on.  You might assortedly wish to:
+<para><xref linkend="slon"> daemons generate some more-or-less verbose
+log files, depending on what debugging level is turned on.  You might
+assortedly wish to:
 
 <itemizedlist>
 
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.10
retrieving revision 1.11
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.10 -r1.11
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -27,8 +27,8 @@
 
 <para> The <envar>REPLICATIONUSER</envar> needs to be a &postgres;
 superuser.  This is typically postgres or pgsql, although in complex
-environments it is quite likely a good idea to define a &slony1; user
-to distinguish between them.</para>
+environments it is quite likely a good idea to define a
+<command>slony</command> user to distinguish between the roles.</para>
 
 <para>You should also set the following shell variables:
 
@@ -125,11 +125,11 @@
 <sect2><title>Configuring the Database for Replication.</title>
 
 <para>Creating the configuration tables, stored procedures, triggers
-and configuration is all done through the <link linkend="slonik"><application>slonik</application></link> tool. It is
-a specialized scripting aid that mostly calls stored procedures in the
-master/slave (node) databases.  The script to create the initial
-configuration for the simple master-slave setup of our pgbench
-database looks like this:
+and configuration is all done through the <xref linkend="slonik">
+tool. It is a specialized scripting aid that mostly calls stored
+procedures in the master/slave (node) databases.  The script to create
+the initial configuration for the simple master-slave setup of our
+pgbench database looks like this:
 
 <programlisting>
 #!/bin/sh
@@ -212,13 +212,11 @@
 slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"
 </programlisting>
 </para>
-<para>Even though we have the <application><link linkend="slon">slon</link></application>
- running on both the master and slave, and they
-are both spitting out diagnostics and other messages, we aren't
-replicating any data yet.  The notices you are seeing is the
-synchronization of cluster configurations between the 2
-<application><link linkend="slon">slon</link></application>
-processes.</para>
+<para>Even though we have the <xref linkend="slon"> running on both
+the master and slave, and they are both spitting out diagnostics and
+other messages, we aren't replicating any data yet.  The notices you
+are seeing reflect the synchronization of cluster configurations
+between the 2 <xref linkend="slon"> processes.</para>
 
 <para>To start replicating the 4 pgbench tables (set 1) from the
 master (node id 1) the the slave (node id 2), execute the following
Index: prerequisites.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/prerequisites.sgml
+++ doc/adminguide/prerequisites.sgml
@@ -13,22 +13,20 @@
 <para>There have been reports of success at running &slony1; hosts
 that are running PostgreSQL on Microsoft
 <trademark>Windows</trademark>.  At this time, the
-<quote>binary</quote> applications (<emphasis>e.g.</emphasis> -
-<application><link linkend="slonik">slonik</link></application>,
-<application><link linkend="slon">slon</link></application>) do not
-run on <trademark>Windows</trademark>, but a <application><link
-    linkend="slon">slon</link></application> running on one of the
-Unix-like systems has no reason to have difficulty connect to a
-PostgreSQL instance running on <trademark>Windows</trademark>.</para>
-
-<para> It ought to be possible to port <application><link
-    linkend="slon"> slon </link></application> and <application><link
-    linkend="slonik"> slonik </link></application> to run on
-<trademark>Windows</trademark>; the conspicuous challenge is of having
-a POSIX-like <filename>pthreads</filename> implementation for
-<application><link linkend="slon"> slon </link></application>, as it
-uses that to have multiple threads of execution.  There are reports of
-there being a <filename>pthreads</filename> library for
+<quote>binary</quote> applications (<emphasis>e.g.</emphasis> - <xref
+linkend="slonik">, <xref linkend="slon">) do not run on
+<trademark>Windows</trademark>, but a <xref linkend="slon"> running on
+one of the Unix-like systems has no reason to have difficulty
+connecting to a PostgreSQL instance running on
+<trademark>Windows</trademark>.</para>
+
+<para> It ought to be possible to port <xref linkend="slon"> and <xref
+linkend="slonik"> to run on <trademark>Windows</trademark>; the
+conspicuous challenge is having a POSIX-like
+<filename>pthreads</filename> implementation for <xref
+linkend="slon">, as it uses that to have multiple threads of
+execution.  There are reports of there being a
+<filename>pthreads</filename> library for
 <trademark>Windows</trademark>, so nothing should prevent some
 interested party from volunteering to do the port.</para>
 
@@ -100,12 +98,12 @@
 <title> Time Synchronization</title>
 
 <para> All the servers used within the replication cluster need to
-have their Real Time Clocks in sync. This is to ensure that <link
-    linkend="slon"> slon </link> doesn't generate errors with messages
-indicating that a subscriber is already ahead of its provider during
-replication.  We recommend you have <application>ntpd</application>
-running on all nodes, where subscriber nodes using the
-<quote>master</quote> provider host as their time server.</para>
+have their Real Time Clocks in sync. This is to ensure that <xref
+linkend="slon"> doesn't generate errors with messages indicating that
+a subscriber is already ahead of its provider during replication.  We
+recommend you have <application>ntpd</application> running on all
+nodes, with subscriber nodes using the <quote>master</quote> provider
+host as their time server.</para>
 
 <para> It is possible for &slony1; itself to function even in the face
 of there being some time discrepancies, but having systems <quote>in
@@ -158,10 +156,9 @@
 any node in the cluster to any other node in the cluster.</para>
 
 <para>For ease of configuration, network addresses should be
-consistent across all of the nodes.  <link linkend="stmtstorepath">
-<command>STORE PATH</command> </link> does allow them to vary, but
-down this road lies madness as you try to manage the multiplicity of
-paths pointing to the same server.</para>
+consistent across all of the nodes.  <xref linkend="stmtstorepath">
+does allow them to vary, but down this road lies madness as you try to
+manage the multiplicity of paths pointing to the same server.</para>
 
 <para>A possible workaround for this, in environments where firewall
 rules are particularly difficult to implement, may be to establish
Index: subscribenodes.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.sgml,v
retrieving revision 1.10
retrieving revision 1.11
diff -Ldoc/adminguide/subscribenodes.sgml -Ldoc/adminguide/subscribenodes.sgml -u -w -r1.10 -r1.11
--- doc/adminguide/subscribenodes.sgml
+++ doc/adminguide/subscribenodes.sgml
@@ -2,14 +2,13 @@
 <sect1 id="subscribenodes"> <title>Subscribing Nodes</title>
 
 <para>Before you subscribe a node to a set, be sure that you have
-<application><link linkend="slon"> slon </link></application>
-processes running for both the provider and the new subscribing node. If
-you don't have slons running, nothing will happen, and you'll beat
-your head against a wall trying to figure out what is going on.</para>
-
-<para>Subscribing a node to a set is done by issuing the <link
-   linkend="slonik"> slonik </link> command <command> <link
-    linkend="stmtsubscribeset"> subscribe set </link> </command>. It may
+<xref linkend="slon"> processes running for both the provider and the
+new subscribing node. If you don't have slons running, nothing will
+happen, and you'll beat your head against a wall trying to figure out
+what is going on.</para>
+
+<para>Subscribing a node to a set is done by issuing the <xref
+linkend="slonik"> command <xref linkend="stmtsubscribeset">. It may
 seem tempting to try to subscribe several nodes to a set within a
 single try block like this:
 
@@ -30,12 +29,11 @@
 sets in that fashion. The proper procedure is to subscribe one node at
 a time, and to check the logs and databases before you move onto
 subscribing the next node to the set. It is also worth noting that the
-<quote>success</quote> within the above <link linkend="slonik">
-<application>slonik</application> </link> try block does not imply
-that nodes 2, 3, and 4 have all been successfully subscribed. It
-merely indicates that the slonik commands were successfully received
-by the <application>slon</application> running on the origin
-node.</para>
+<quote>success</quote> within the above <xref linkend="slonik"> try
+block does not imply that nodes 2, 3, and 4 have all been successfully
+subscribed. It merely indicates that the slonik commands were
+successfully received by the <application>slon</application> running
+on the origin node.</para>
 
 <para>A typical sort of problem that will arise is that a cascaded
 subscriber is looking for a provider that is not ready yet.  In that
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.21
retrieving revision 1.22
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.21 -r1.22
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -6,15 +6,14 @@
 <question><para>I looked for the <envar>_clustername</envar> namespace, and
 it wasn't there.</para></question>
 
-<answer><para> If the DSNs are wrong, then <link linkend="slon">
-<application>slon</application></link> instances can't connect to the nodes.</para>
+<answer><para> If the DSNs are wrong, then <xref linkend="slon">
+instances can't connect to the nodes.</para>
 
 <para>This will generally lead to nodes remaining entirely untouched.</para>
 
-<para>Recheck the connection configuration.  By the way, since <link
-linkend="slon"><application>slon</application></link> links to libpq, you could
-have password information stored in <filename>
-$HOME/.pgpass</filename>, partially filling in
+<para>Recheck the connection configuration.  By the way, since <xref
+linkend="slon"> links to libpq, you could have password information
+stored in <filename> $HOME/.pgpass</filename>, partially filling in
 right/wrong authentication information there.</para>
 </answer>
 </qandaentry>
@@ -72,8 +71,8 @@
 </answer>
 </qandaentry>
 <qandaentry>
-<question><para> <link linkend="slon"> <application>slon</application></link> does
-not restart after crash</para>
+<question><para> <xref linkend="slon"> does not restart after
+crash</para>
 
 <para> After an immediate stop of postgresql (simulation of system
 crash) in pg_catalog.pg_listener a tuple with
@@ -88,8 +87,7 @@
 <envar>pg_catalog.pg_listener</envar>, used by
 <productname>PostgreSQL</productname> to manage event notifications,
 contains some entries that are pointing to backends that no longer
-exist.  The new <link linkend="slon">
-<application>slon</application></link> instance connects to the
+exist.  The new <xref linkend="slon"> instance connects to the
 database, and is convinced, by the presence of these entries, that an
 old <application>slon</application> is still servicing this &slony1;
 node.</para>
@@ -112,9 +110,8 @@
 restart node 4;
 </programlisting></para>
 
-<para> <command> <link linkend="stmtrestartnode">RESTART NODE</link>
-</command> cleans up dead notifications so that you can restart the
-node.</para>
+<para> <xref linkend="stmtrestartnode"> cleans up dead notifications
+so that you can restart the node.</para>
 
 <para>As of version 1.0.5, the startup process of slon looks for this
 condition, and automatically cleans it up.</para>
@@ -251,13 +248,12 @@
 CONTEXT:  PL/pgSQL function "setaddtable_int" line 71 at SQL statement
 </screen></para></question>
 
-<answer><para> The table IDs used in <command><link
-linkend="stmtsetaddtable">SET ADD TABLE</link></command> are
-required to be unique <emphasis>ACROSS ALL SETS</emphasis>.  Thus, you
-can't restart numbering at 1 for a second set; if you are numbering
-them consecutively, a subsequent set has to start with IDs after where
-the previous set(s) left off.</para>
-</answer> </qandaentry>
+<answer><para> The table IDs used in <xref linkend="stmtsetaddtable">
+are required to be unique <emphasis>ACROSS ALL SETS</emphasis>.  Thus,
+you can't restart numbering at 1 for a second set; if you are
+numbering them consecutively, a subsequent set has to start with IDs
+after where the previous set(s) left off.</para> </answer>
+</qandaentry>
 
 <qandaentry>
 <question><para>I need to drop a table from a replication set</para></question>
@@ -275,11 +271,11 @@
 command SET DROP TABLE, which will "do the trick."</para></listitem>
 
 <listitem><para> If you are still using 1.0.1 or 1.0.2, the
-<emphasis>essential functionality of <command><link linkend="stmtsetdroptable">SET DROP TABLE</link></command> involves
-the functionality in <function>droptable_int()</function>.  You can
-fiddle this by hand by finding the table ID for the table you want to
-get rid of, which you can find in sl_table, and then run the following
-three queries, on each host:</emphasis>
+<emphasis>essential functionality of <xref linkend="stmtsetdroptable">
+involves the functionality in <function>droptable_int()</function>.
+You can fiddle this by hand by finding the table ID for the table you
+want to get rid of, which you can find in sl_table, and then run the
+following three queries, on each host:</emphasis>
 
 <programlisting>
   select _slonyschema.alterTableRestore(40);
@@ -293,12 +289,11 @@
 
 <para> You'll have to run these three queries on all of the nodes,
 preferably firstly on the origin node, so that the dropping of this
-propagates properly.  Implementing this via a <link linkend="slonik">
-slonik </link> statement with a new &slony1; event would do that.
-Submitting the three queries using <command> <link
-linkend="stmtddlscript"> EXECUTE SCRIPT </link> </command> could do
-that.  Also possible would be to connect to each database and submit
-the queries by hand.</para></listitem> </itemizedlist></para>
+propagates properly.  Implementing this via a <xref linkend="slonik">
+statement with a new &slony1; event would do that.  Submitting the
+three queries using <xref linkend="stmtddlscript"> could do that.
+Also possible would be to connect to each database and submit the
+queries by hand.</para></listitem> </itemizedlist></para>
 </answer>
 </qandaentry>
 
@@ -306,10 +301,8 @@
 <question><para>I need to drop a sequence from a replication set</para></question>
 
 <answer><para></para><para>If you are running 1.0.5 or later, there is
-a <command> <link linkend="stmtsetdropsequence"> SET DROP SEQUENCE
-</link></command> command in Slonik to allow you to do this,
-parallelling <command> <link linkend="stmtsetdroptable"> SET DROP
-TABLE</link></command>.</para>
+a <xref linkend="stmtsetdropsequence"> command in Slonik to allow you
+to do this, parallelling <xref linkend="stmtsetdroptable">.</para>
 
 <para>If you are running 1.0.2 or earlier, the process is a bit more manual.</para>
 
@@ -336,15 +329,14 @@
 </programlisting></para>
 
 <para>Those two queries could be submitted to all of the nodes via
-<function>ddlscript()</function> / <command> <link
-linkend="stmtddlscript"> EXECUTE SCRIPT </link> </command>, thus
-eliminating the sequence everywhere <quote>at once.</quote> Or they
-may be applied by hand to each of the nodes.</para>
-
-<para>Similarly to <command> <link linkend="stmtsetdroptable"> SET
-DROP TABLE </link> </command>, this is implemented &slony1; version
-1.0.5 as <command> <link linkend="stmtsetdropsequence"> SET DROP
-SEQUENCE</link></command>.</para></answer></qandaentry>
+<xref linkend="function.ddlscript-integer-text-integer"> / <xref
+linkend="stmtddlscript">, thus eliminating the sequence everywhere
+<quote>at once.</quote> Or they may be applied by hand to each of the
+nodes.</para>
+
+<para>Similarly to <xref linkend="stmtsetdroptable">, this is
+implemented in &slony1; version 1.0.5 as <xref
+linkend="stmtsetdropsequence">.</para></answer></qandaentry>
 
 <qandaentry>
 <question><para>Slony-I: cannot add table to currently subscribed set 1</para>
@@ -480,9 +472,8 @@
 
 <para>&slony1; 1.1 provides a stored procedure that allows
 <command>SYNC</command> counts to be updated on the origin based on a
-<application>cron</application> job even if there is no <link
-linkend="slon"> <application>slon</application></link> daemon
-running.</para> </answer></qandaentry>
+<application>cron</application> job even if there is no <xref
+linkend="slon"> daemon running.</para> </answer></qandaentry>
 
 <qandaentry>
 <question><para>I pointed a subscribing node to a different provider
@@ -499,19 +490,16 @@
 </itemizedlist></para>
 
 <para>The subscription for node 3 was changed to have node 1 as
-provider, and we did <command> <link linkend="stmtdropset"> DROP
-SET</link></command>/<command> <link linkend="stmtsubscribeset">
-SUBSCRIBE SET</link> </command> for node 2 to get it
-repopulating.</para>
+provider, and we did <xref linkend="stmtdropset"> / <xref
+linkend="stmtsubscribeset"> for node 2 to get it repopulating.</para>
 
 <para>Unfortunately, replication suddenly stopped to node 3.</para>
 
 <para>The problem was that there was not a suitable set of
 <quote>listener paths</quote> in sl_listen to allow the events from
 node 1 to propagate to node 3.  The events were going through node 2,
-and blocking behind the <command> <link linkend="stmtsubscribeset">
-SUBSCRIBE SET </link> </command> event that node 2 was working
-on.</para>
+and blocking behind the <xref linkend="stmtsubscribeset"> event that
+node 2 was working on.</para>
 
 <para>The following slonik script dropped out the listen paths where
 node 3 had to go through node 2, and added in direct listens between
@@ -538,28 +526,25 @@
 <itemizedlist>
 
 <listitem><para> If you have multiple nodes, and cascaded subscribers,
-you need to be quite careful in populating the <command> <link
-linkend="stmtstorelisten"> STORE LISTEN </link></command> entries, and
-in modifying them if the structure of the replication
-<quote>tree</quote> changes.</para></listitem>
+you need to be quite careful in populating the <xref
+linkend="stmtstorelisten"> entries, and in modifying them if the
+structure of the replication <quote>tree</quote>
+changes.</para></listitem>
 
 <listitem><para> Version 1.1 should provide better tools to help
 manage this.</para>
 
-<para> In fact, it does.  <link linkend="autolisten"> Automated Listen
-Path Generation </link> provides a heuristic to generate listener
-entries.  If you are still tied to earlier versions, a Perl script,
-<link linkend="regenlisten">
-<application>regenerate-listens.pl</application> </link>, provides a
-way of querying a live &slony1; instance and generating the <link
-linkend="slonik"> Slonik </link> commands to generate the listen path
+<para> In fact, it does.  <xref linkend="autolisten"> provides a
+heuristic to generate listener entries.  If you are still tied to
+earlier versions, a Perl script, <xref linkend="regenlisten">,
+provides a way of querying a live &slony1; instance and generating the
+<xref linkend="slonik"> commands to generate the listen path
 network.</para></listitem>
 
 </itemizedlist></para>
 
-<para>The issues of <quote>listener paths</quote> are discussed further at
-<link linkend="listenpaths"> Slony Listen Paths </link></para>
-</answer>
+<para>The issues of <quote>listener paths</quote> are discussed
+further at <xref linkend="listenpaths">.</para></answer>
 </qandaentry>
 
 <qandaentry id="faq17">
@@ -594,10 +579,10 @@
 (6 rows)
 </screen></para>
 
-<para>In version 1.0.5, the <command><link linkend="stmtdropnode">
-drop node </link> </command> function purges out entries in sl_confirm
-for the departing node.  In earlier versions, this needs to be done
-manually.  Supposing the node number is 3, then the query would be:
+<para>In version 1.0.5, the <xref linkend="stmtdropnode"> function
+purges out entries in sl_confirm for the departing node.  In earlier
+versions, this needs to be done manually.  Supposing the node number
+is 3, then the query would be:
 
 <screen>
 delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;
@@ -716,9 +701,9 @@
 
 <qandaentry>
 
-<question><para> If you have a <link linkend="slonik">slonik</link>
-script something like this, it will hang on you and never complete,
-because you can't have <command>wait for event</command> inside a
+<question><para> If you have a <xref linkend="slonik"> script
+something like this, it will hang on you and never complete, because
+you can't have <command>wait for event</command> inside a
 <command>try</command> block. A <command>try</command> block is
 executed as one transaction, and the event that you are waiting for
 can never arrive inside the scope of the transaction.
@@ -741,9 +726,8 @@
 }
 </programlisting></para></question>
 
-<answer><para> You must not invoke <command> <link
-linkend="stmtwaitevent"> wait for event</link> </command> inside a
-<quote>try</quote> block.</para></answer>
+<answer><para> You must not invoke <xref linkend="stmtwaitevent">
+inside a <quote>try</quote> block.</para></answer>
 
 </qandaentry>
 
@@ -759,11 +743,10 @@
 </answer>
 
 <answer> <para>(Jan Wieck comments:) The order of table ID's is only
-significant during a <command> <link linkend="stmtlockset"> LOCK
-SET</link> </command> in preparation of switchover. If that order is
-different from the order in which an application is acquiring its
-locks, it can lead to deadlocks that abort either the application or
-<application>slon</application>.
+significant during a <xref linkend="stmtlockset"> in preparation of
+switchover. If that order is different from the order in which an
+application is acquiring its locks, it can lead to deadlocks that
+abort either the application or <application>slon</application>.
 </para>
 </answer>
 
@@ -784,20 +767,17 @@
 </question>
 
 <answer><para> Firstly, let's look at how it is handled
-<emphasis>absent</emphasis> of the special handling of the <link
-linkend="stmtstoretrigger"> <command>STORE TRIGGER</command> </link>
-Slonik command.  </para>
-
-<para> The function <link
-linkend="function.altertableforreplication-integer">
-altertableforreplication(table id) </link> prepares each table for
-replication.
+<emphasis>absent</emphasis> of the special handling of the <xref
+linkend="stmtstoretrigger"> Slonik command.  </para>
+
+<para> The function <xref
+linkend="function.altertableforreplication-integer"> prepares each
+table for replication.
 
 <itemizedlist>
 
 <listitem><para> On the origin node, this involves adding a trigger
-that uses the <link linkend="function.logtrigger">
-<function>logTrigger</function>() </link> function to the
+that uses the <xref linkend="function.logtrigger"> function to the
 table.</para>
 
 <para> That trigger initiates the action of logging all updates to the
@@ -827,8 +807,8 @@
 
 </answer>
 
-<answer> <para> Now, consider how <link linkend="stmtstoretrigger">
-<command>STORE TRIGGER</command> </link> enters into things.</para>
+<answer> <para> Now, consider how <xref linkend="stmtstoretrigger">
+enters into things.</para>
 
 <para> Simply put, this command causes
 &slony1; to restore the trigger using
@@ -858,9 +838,8 @@
 DETAIL:  Key (sub_provider,sub_receiver)=(1,501) is not present in table "sl_path".
 </screen>
 
-<para> This is then followed by a series of failed syncs as the
-<application> <link linkend="slon"> slon </link> </application> shuts
-down:
+<para> This is then followed by a series of failed syncs as the <xref
+linkend="slon"> shuts down:
 
 <screen>
 DEBUG2 remoteListenThread_1: queue event 1,4897517 SYNC
@@ -877,19 +856,17 @@
 
 </para></question>
 
-<answer><para> If you see a <application> <link linkend="slon"> slon
-</link> </application> shutting down with <emphasis>ignore new events
-due to shutdown</emphasis> log entries, you'll typically have to step
-back to <emphasis>before</emphasis> they started failing to see
-indication of the root cause of the problem.
+<answer><para> If you see a <xref linkend="slon"> shutting down with
+<emphasis>ignore new events due to shutdown</emphasis> log entries,
+you'll typically have to step back to <emphasis>before</emphasis> they
+started failing to see indication of the root cause of the problem.
 
 </para></answer>
 
 <answer><para> In this particular case, the problem was that some of
-the <link linkend="stmtstorepath"> <command>STORE PATH </command>
-</link> commands had not yet made it to node 4 before the <link
-linkend="stmtsubscribeset"> <command>SUBSCRIBE SET </command> </link>
-command propagated. </para>
+the <xref linkend="stmtstorepath"> commands had not yet made it to
+node 4 before the <xref linkend="stmtsubscribeset"> command
+propagated. </para>
 
 <para>This is yet another example of the need to not do things too
 terribly quickly; you need to be sure things are working right
@@ -901,22 +878,20 @@
 
 <qandaentry>
 
-<question><para>I just used <link linkend="stmtmoveset"> <command>MOVE
-SET</command> </link> to move the origin to a new node.
-Unfortunately, some subscribers are still pointing to the former
-origin node, so I can't take it out of service for maintenance without
-stopping them from getting updates.  What do I do?  </para></question>
-
-<answer><para> You need to use <link linkend="stmtsubscribeset">
-<command>SUBSCRIBE SET</command> </link> to alter the subscriptions
-for those nodes to have them subscribe to a provider that
-<emphasis>will</emphasis> be sticking around during the
+<question><para>I just used <xref linkend="stmtmoveset"> to move the
+origin to a new node.  Unfortunately, some subscribers are still
+pointing to the former origin node, so I can't take it out of service
+for maintenance without stopping them from getting updates.  What do I
+do?  </para></question>
+
+<answer><para> You need to use <xref linkend="stmtsubscribeset"> to
+alter the subscriptions for those nodes to have them subscribe to a
+provider that <emphasis>will</emphasis> be sticking around during the
 maintenance.</para>
 
-<warning> <para> What you <emphasis>don't</emphasis> do is to <link
-linkend="stmtunsubscribeset"> <command>UNSUBSCRIBE SET</command>
-</link>; that would require reloading all data for the nodes from
-scratch later.
+<warning> <para> What you <emphasis>don't</emphasis> do is to <xref
+linkend="stmtunsubscribeset">; that would require reloading all data
+for the nodes from scratch later.
 
 </para></warning>
 </answer>
@@ -1006,8 +981,7 @@
 
 <answer> <para> Cause: you have likely issued <command>alter
 table</command> statements directly on the databases instead of using
-the slonik <link linkend="stmtddlscript"> <command>EXECUTE
-SCRIPT</command> </link> command.
+the slonik <xref linkend="stmtddlscript"> command.
 
 <para>The solution is to rebuild the trigger on the affected table and
 fix the entries in <envar>sl_log_1 </envar> by hand.
@@ -1083,9 +1057,8 @@
 DETAIL:  Key (sub_provider,sub_receiver)=(1,501) is not present in table "sl_path".
 </screen>
 
-<para> This is then followed by a series of failed syncs as the
-<application> <link linkend="slon"> slon </link> </application> shuts
-down:
+<para> This is then followed by a series of failed syncs as the <xref
+linkend="slon"> shuts down:
 
 <screen>
 DEBUG2 remoteListenThread_1: queue event 1,4897517 SYNC
@@ -1102,18 +1075,16 @@
 
 </para></question>
 
-<answer><para> If you see a <application> <link linkend="slon"> slon
-</link> </application> shutting down with <emphasis>ignore new events
-due to shutdown</emphasis> log entries, you typically need to step
-back in the log to <emphasis>before</emphasis> they started failing to
-see indication of the root cause of the problem.
-</para></answer>
+<answer><para> If you see a <xref linkend="slon"> shutting down with
+<emphasis>ignore new events due to shutdown</emphasis> log entries,
+you typically need to step back in the log to
+<emphasis>before</emphasis> they started failing to see indication of
+the root cause of the problem.  </para></answer>
 
 <answer><para> In this particular case, the problem was that some of
-the <link linkend="stmtstorepath"> <command>STORE PATH </command>
-</link> commands had not yet made it to node 4 before the <link
-linkend="stmtsubscribeset"> <command>SUBSCRIBE SET </command> </link>
-command propagated. </para>
+the <xref linkend="stmtstorepath"> commands had not yet made it to
+node 4 before the <xref linkend="stmtsubscribeset"> command
+propagated. </para>
 
 <para>This demonstrates yet another example of the need to not do
 things in a rush; you need to be sure things are working right
@@ -1162,17 +1133,14 @@
 
 <qandaentry>
 <question> <para> I had a network <quote>glitch</quote> that led to my
-using <command><link linkend="stmtfailover">FAILOVER</link></command>
-to fail over to an alternate node.  The failure wasn't a disk problem
-that would corrupt databases; why do I need to rebuild the failed node
-from scratch? </para></question>
-
-<answer><para> The action of <command><link
-linkend="stmtfailover">FAILOVER</link></command> is to
-<emphasis>abandon</emphasis> the failed node so that no more
-&slony1; activity goes to or from that node.
-As soon as that takes place, the failed node will progressively fall
-further and further out of sync.
+using <xref linkend="stmtfailover"> to fail over to an alternate node.
+The failure wasn't a disk problem that would corrupt databases; why do
+I need to rebuild the failed node from scratch? </para></question>
+
+<answer><para> The action of <xref linkend="stmtfailover"> is to
+<emphasis>abandon</emphasis> the failed node so that no more &slony1;
+activity goes to or from that node.  As soon as that takes place, the
+failed node will progressively fall further and further out of sync.
 </para></answer>
 
 <answer><para> The <emphasis>big</emphasis> problem with trying to
@@ -1183,12 +1151,10 @@
 never was a disk failure making it <quote>physical.</quote>
 </para></answer>
 
-<answer><para> As discusssed in the section on <link
-linkend="failover"> Doing switchover and failover with
-&slony1;</link>, using <command><link
-linkend="stmtfailover">FAILOVER</link></command> should be considered
-a <emphasis>last resort</emphasis> as it implies that you are
-abandoning the origin node as being corrupted.  </para></answer>
+<answer><para> As discussed in <xref linkend="failover">, using <xref
+linkend="stmtfailover"> should be considered a <emphasis>last
+resort</emphasis> as it implies that you are abandoning the origin
+node as being corrupted.  </para></answer>
 </qandaentry>
 
 <qandaentry id="morethansuper">
@@ -1279,10 +1245,10 @@
 </qandaentry>
 
 <qandaentry>
-<question><para> Node #1 was dropped via <link linkend="stmtdropnode">
-<command>DROP NODE</command> </link>, and the <link linkend="slon">
-<application>slon</application> </link> one of the other nodes is
-repeatedly failing with the error message:
+
+<question><para> Node #1 was dropped via <xref
+linkend="stmtdropnode">, and the <xref linkend="slon"> one of the
+other nodes is repeatedly failing with the error message:
 
 <screen>
 ERROR  remoteWorkerThread_3: "begin transaction; set transaction isolation level
@@ -1299,10 +1265,8 @@
 DEBUG1 syncThread: thread done
 </screen>
 
-<para> Evidently, a <link linkend="stmtstorelisten"> <command>STORE
-LISTEN</command> </link> request hadn't propagated yet before node 1
-was dropped.
-</para></question>
+<para> Evidently, a <xref linkend="stmtstorelisten"> request hadn't
+propagated yet before node 1 was dropped.  </para></question>
 
 <answer id="eventsurgery"><para> This points to a case where you'll
 need to do <quote>event surgery</quote> on one or more of the nodes.
@@ -1317,10 +1281,9 @@
 <para> That implies that the event is stored on node #2, as it
 wouldn't be on node #3 if it had not already been processed
 successfully.  The easiest way to cope with this situation is to
-delete the offending <link linkend="table.sl-event">
-<envar>sl_event</envar> </link> entry on node #2.  You'll connect to
-node #2's database, and search for the <command>STORE_LISTEN</command>
-event:
+delete the offending <xref linkend="table.sl-event"> entry on node #2.
+You'll connect to node #2's database, and search for the
+<command>STORE_LISTEN</command> event:
 
 <para> <command> select * from sl_event where ev_type =
 'STORE_LISTEN';</command>
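
A hedged SQL sketch of that surgery, assuming the cluster schema (for
example _mycluster) is on the search_path; the ev_origin and ev_seqno
values are placeholders that you would take from the SELECT above:

    -- confirm which event row is stranded
    select ev_origin, ev_seqno, ev_type
      from sl_event where ev_type = 'STORE_LISTEN';

    -- then, on node #2 only, remove just that row
    delete from sl_event
     where ev_origin = 1 and ev_seqno = 2585
       and ev_type = 'STORE_LISTEN';
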
Index: usingslonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/usingslonik.sgml,v
retrieving revision 1.4
retrieving revision 1.5
diff -Ldoc/adminguide/usingslonik.sgml -Ldoc/adminguide/usingslonik.sgml -u -w -r1.4 -r1.5
--- doc/adminguide/usingslonik.sgml
+++ doc/adminguide/usingslonik.sgml
@@ -26,10 +26,10 @@
 
 <listitem><para> People have observed that
 <application>Slonik</application> does not provide any notion of
-iteration.  It is common to want to create a set of similar <link
-      linkend="stmtstorepath"> <command>STORE PATH</command></link> entries,
-since, in most cases, hosts will likely access a particular server via
-the same host name or IP address.</para></listitem>
+iteration.  It is common to want to create a set of similar <xref
+linkend="stmtstorepath"> entries, since, in most cases, hosts will
+likely access a particular server via the same host name or IP
+address.</para></listitem>
 
 <listitem><para> Users seem interested in wrapping everything possible
 in <command>TRY</command> blocks, which is regrettably
@@ -74,8 +74,8 @@
 <para> The test bed found in the <filename>src/ducttape</filename>
 directory takes this approach.</para></listitem>
 
-<listitem><para> The <link linkend="altperl"> altperl admin scripts
-</link> use Perl code to generate Slonik scripts.</para>
+<listitem><para> The <xref linkend="altperl"> use Perl code to
+generate Slonik scripts.</para>
 
 <para> You define the cluster as a set of Perl objects; each script
 walks through the Perl objects as needed to satisfy whatever it is
@@ -236,10 +236,9 @@
 
 <para> A more sophisticated approach might involve defining some
 common components, notably the <quote>preamble</quote> that consists
-of the <command><link linkend="clustername">CLUSTER
-NAME</link></command> <command><link linkend="admconninfo">ADMIN
-CONNINFO</link></command> commands that are common to every Slonik
-script, thus:
+of the <xref linkend="clustername"> and <xref linkend="admconninfo">
+commands that are common to every Slonik script, thus:
+
 <programlisting>
 CLUSTER=T1
 DB1=slony_test1
@@ -382,7 +381,39 @@
 <para> <command> select _slonycluster.storelisten(pa_server,
 pa_server, pa_client) from _slonycluster.sl_path;</command></para>
 
-<para> 
+<para> The result of this set of queries is to regenerate
+<emphasis/and propagate/ the listen paths.  By running the main
+<function/ _slonycluster.storelisten()/ function,
+<command/STORE_LISTEN/ events are raised to cause the listen paths to
+propagate to the other nodes in the cluster.
+
+<para> If there was a <emphasis/local/ problem on one node, and you
+didn't want the updates to propagate (this would be an unusual
+situation; you almost certainly want to fix things
+<emphasis/everywhere/), the queries would instead be:
+
+<para> <command> select
+_slonycluster.droplisten_int(li_origin,li_provider,li_receiver) from
+_slonycluster.sl_listen;</command></para>
+
+<para> <command> select _slonycluster.storelisten_int(pa_server,
+pa_server, pa_client) from _slonycluster.sl_path;</command></para>
+
+<para> If you are planning to add &slony1; support to other tools
+(<emphasis>e.g.</emphasis> - adding replication support to something
+like <ulink url="http://www.pgadmin.org/"> <productname>pgAdmin
+III</productname> </ulink>), you need to be clear on where various
+functions need to be called.  The normal <quote>protocol</quote> is
+thus:
+
+<itemizedlist>
+
+<listitem><para> The <quote>main</quote> function
+(<emphasis>i.e.</emphasis> - without the <command>_int</command>
+suffix) is called on a <quote>relevant</quote> node in the &slony1;
+cluster.
+
+</itemizedlist>
 
 </sect1>
 <!-- Keep this comment at the end of the file
Index: reshape.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/reshape.sgml
+++ doc/adminguide/reshape.sgml
@@ -9,43 +9,39 @@
 
 <listitem><para> If you want a node that is a subscriber to become the
 origin for a particular replication set, you will have to issue a
-suitable <link linkend="slonik"> slonik </link> <command>MOVE SET</command>
+suitable <xref linkend="slonik"> <command>MOVE SET</command>
 operation.</para></listitem>
 
 <listitem><para> You may subsequently, or instead, wish to modify the
 subscriptions of other nodes.  You might want to modify a node to get
 its data from a different provider, or to change it to turn forwarding
-on or off.  This can be accomplished by issuing the slonik <command>
-<link linkend="stmtsubscribeset"> SUBSCRIBE SET</link> </command>
-operation with the new subscription information for the node; &slony1;
-will change the configuration.</para></listitem>
+on or off.  This can be accomplished by issuing the slonik <xref
+linkend="stmtsubscribeset"> operation with the new subscription
+information for the node; &slony1; will change the
+configuration.</para></listitem>
 
 <listitem><para> If the directions of data flows have changed, it is
-doubtless appropriate to issue a set of <command><link
-linkend="stmtdroplisten"> DROP LISTEN</link></command> operations to
-drop out obsolete paths between nodes and <command><link
-linkend="stmtstorelisten">STORE LISTEN </link></command> to add the
-new ones.  At present, this is not changed automatically; at some
-point, <command> <link linkend="stmtmoveset"> MOVE
-SET</link></command> and <command> <link linkend="stmtsubscribeset">
-SUBSCRIBE SET</link> </command> might change the paths as a
-side-effect.  See <link linkend="listenpaths"> Slony Listen Paths
-</link> for more information about this.  In version 1.1 and later, it
-is likely that the generation of <link linkend="table.sl-listen">
-<envar>sl_listen</envar></link> entries will be entirely automated,
-where they will be regenerated when changes are made to <link
-linkend="table.sl-path"> <envar>sl_path</envar></link> or <link
-linkend="table.sl-path"> <envar>sl_subscribe</envar></link>, thereby
-making it unnecessary to even think about <command> <link
-linkend="stmtstorelisten"> STORE LISTEN
-</link></command>.</para></listitem>
+doubtless appropriate to issue a set of <xref
+linkend="stmtdroplisten"> operations to drop out obsolete paths
+between nodes and <xref linkend="stmtstorelisten"> to add the new
+ones.  At present, this is not changed automatically; at some point,
+<xref linkend="stmtmoveset"> and <xref
+linkend="stmtsubscribeset"> might change the paths as a side-effect.
+See <xref linkend="listenpaths"> for more information about this.  In
+version 1.1 and later, it is likely that the generation of <xref
+linkend="table.sl-listen"> entries will be entirely automated, where
+they will be regenerated when changes are made to <xref
+linkend="table.sl-path"> or <xref linkend="table.sl-path">, thereby
+making it unnecessary to even think about <xref
+linkend="stmtstorelisten">.</para></listitem>
 
 </itemizedlist>
 </para>
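
A hedged slonik sketch of the first two reshaping steps above, using
invented node numbers (set 1 moves from node 1 to node 2, and node 3 is
re-pointed at the new origin):

    # shift the origin of set 1 from node 1 to node 2
    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);

    # have node 3 pull the set from node 2 instead of node 1
    subscribe set (id = 1, provider = 2, receiver = 3, forward = no);
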
 <para> The <filename>altperl</filename> toolset includes a
-<application>regenerate-listens.pl</application> script that is up to the task of
-creating the new <command>STORE LISTEN</command> commands; it isn't,
-however, smart enough to know what listener paths should be dropped.
+<application>regenerate-listens.pl</application> script that is up to
+the task of creating the new <xref linkend="stmtstorelisten">
+commands; it isn't, however, smart enough to know what listener paths
+should be dropped.
 </para>
 
 </sect1>
Index: startslons.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/startslons.sgml,v
retrieving revision 1.9
retrieving revision 1.10
diff -Ldoc/adminguide/startslons.sgml -Ldoc/adminguide/startslons.sgml -u -w -r1.9 -r1.10
--- doc/adminguide/startslons.sgml
+++ doc/adminguide/startslons.sgml
@@ -4,14 +4,13 @@
 <para>The programs that actually perform &slony1; replication are the
 <application>slon</application> daemons.</para>
 
-<para>You need to run one <application><link linkend="slon"> slon
-</link></application> instance for each node in a &slony1; cluster,
-whether you consider that node a <quote>master</quote> or a
-<quote>slave</quote>. Since a <command>MOVE SET</command> or
-<command>FAILOVER</command> can switch the roles of nodes, slon needs
-to be able to function for both providers and subscribers.  It is not
-essential that these daemons run on any particular host, but there are
-some principles worth considering:
+<para>You need to run one <xref linkend="slon"> instance for each node
+in a &slony1; cluster, whether you consider that node a
+<quote>master</quote> or a <quote>slave</quote>. Since a <command>MOVE
+SET</command> or <command>FAILOVER</command> can switch the roles of
+nodes, slon needs to be able to function for both providers and
+subscribers.  It is not essential that these daemons run on any
+particular host, but there are some principles worth considering:
 
 <itemizedlist>
 
@@ -46,17 +45,17 @@
 
 <listitem><para> <filename>tools/altperl/slon_watchdog.pl</filename> -
 an <quote>early</quote> version that basically wraps a loop around the
-invocation of <application><link linkend="slon"> slon
-</link></application>, restarting any time it falls over</para>
+invocation of <xref linkend="slon">, restarting any time it falls
+over</para>
 </listitem>
 
 <listitem><para> <filename>tools/altperl/slon_watchdog2.pl</filename>
 - a somewhat more intelligent version that periodically polls the
 database, checking to see if a <command>SYNC</command> has taken place
 recently.  We have had VPN connections that occasionally fall over
-without signalling the application, so that the <application><link
-       linkend="slon"> slon </link></application> stops working, but doesn't
-actually die; this polling addresses that issue.</para></listitem>
+without signalling the application, so that the <xref linkend="slon">
+stops working, but doesn't actually die; this polling addresses that
+issue.</para></listitem>
 
 </itemizedlist></para>
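
For concreteness, a hedged sketch of starting one slon per node from a
shell; the cluster name, database, hosts, and log locations are invented:

    slon mycluster "dbname=mydb host=node1 user=slony" \
        >> /var/log/slony/node1.log 2>&1 &
    slon mycluster "dbname=mydb host=node2 user=slony" \
        >> /var/log/slony/node2.log 2>&1 &

In practice one of the watchdog scripts above, or the local init system,
would wrap these invocations rather than a bare shell.
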
 
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.10
retrieving revision 1.11
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.10 -r1.11
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -7,40 +7,38 @@
 
 <para>This can be fairly easily remedied.</para>
 
-<para>You cannot directly use <link linkend="slonik">slonik</link>
-commands <command><link linkend="stmtsetaddtable"> SET ADD
-TABLE</link></command> or <command><link linkend="stmtsetaddsequence">
-SET ADD SEQUENCE</link></command> in order to add tables and sequences
+<para>You cannot directly use the <xref linkend="slonik"> commands
+<xref linkend="stmtsetaddtable"> or <xref linkend="stmtsetaddsequence">
+to add tables and sequences
 to a replication set that is presently replicating; you must instead
 create a new replication set.  Once it is identically subscribed
 (e.g. - the set of providers and subscribers is <emphasis>entirely
 identical</emphasis> to that for the set it is to merge with), the
-sets may be merged together using <command><link
-linkend="stmtmergeset">MERGE SET</link></command>.</para>
+sets may be merged together using <xref
+linkend="stmtmergeset">.</para>
 
 <para>Up to and including 1.0.2, there was a potential problem where
-if <command><link linkend="stmtmergeset">MERGE SET</link></command> is
-issued while other subscription-related events are pending, it is
-possible for things to get pretty confused on the nodes where other
-things were pending.  This problem was resolved in 1.0.5.</para>
+if <xref linkend="stmtmergeset"> is issued while other
+subscription-related events are pending, it is possible for things to
+get pretty confused on the nodes where other things were pending.
+This problem was resolved in 1.0.5.</para>
 
-<para> Note that if you add nodes, you will need to add both <link
-   linkend="stmtstorepath">STORE PATH</link> statements to indicate how
-nodes communicate with one another, and <link
-   linkend="stmtstorelisten">STORE LISTEN</link> statements to
+<para> Note that if you add nodes, you will need to add both <xref
+linkend="stmtstorepath"> statements to indicate how nodes communicate
+with one another, and <xref linkend="stmtstorelisten"> statements to
 configure the <quote>communications network</quote> that results
-from that.  See the section on <link linkend="listenpaths"> Listen
-Paths </link> for more details on the latter.</para>
+from that.  See <xref linkend="listenpaths"> for more details on the
+latter.</para>
 
 <para>It is suggested that you be very deliberate when adding such
 things.  For instance, submitting multiple subscription requests for a
-particular set in one <link linkend="slonik"> slonik </link> script
-often turns out quite badly.  If it is <emphasis>truly</emphasis>
-necessary to automate this, you'll probably want to submit <command>
-<link linkend="stmtwaitevent">WAIT FOR EVENT</link></command>
-requests in between subscription requests in order that the <link
-   linkend="slonik">slonik</link> script wait for one subscription to
-complete processing before requesting the next one.</para>
+particular set in one <xref linkend="slonik"> script often turns out
+quite badly.  If it is <emphasis>truly</emphasis> necessary to
+automate this, you'll probably want to submit <xref
+linkend="stmtwaitevent"> requests in between subscription requests in
+order that the <xref linkend="slonik"> script wait for one
+subscription to complete processing before requesting the next
+one.</para>
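
To illustrate, a hedged sketch of serializing two subscription requests
with WAIT FOR EVENT (preamble omitted, node numbers invented):

    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
    # block until node 2 has confirmed node 1's events
    wait for event (origin = 1, confirmed = 2, wait on = 1);
    subscribe set (id = 1, provider = 2, receiver = 3, forward = no);
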
 
 <para>But in general, it is likely to be easier to cope with complex
 node reconfigurations by making sure that one change has been
Index: intro.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v
retrieving revision 1.10
retrieving revision 1.11
diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.10 -r1.11
--- doc/adminguide/intro.sgml
+++ doc/adminguide/intro.sgml
@@ -20,9 +20,8 @@
 few dozen servers.  If the number of servers grows beyond that, the
 cost of communications becomes prohibitively high.</para>
 
-<para> See also <link linkend="slonylistenercosts"> SlonyListenerCosts
-</link> for a further analysis of costs associated with having many
-nodes.</para>
+<para> See also <xref linkend="slonylistenercosts"> for a further
+analysis of costs associated with having many nodes.</para>
 
 <para> &slony1; is a system intended for data centers and backup
 sites, where the normal mode of operation is that all nodes are
@@ -119,9 +118,9 @@
 
 <para>There is a capability for &slony1; to propagate DDL changes if
 you submit them as scripts via the <application>slonik</application>
-<command> <link linkend="stmtddlscript"> EXECUTE SCRIPT
-</link></command> operation.  That is not <quote>automatic;</quote>
-you have to construct an SQL DDL script and submit it.</para>
+<xref linkend="stmtddlscript"> operation.  That is not
+<quote>automatic;</quote> you have to construct an SQL DDL script and
+submit it.</para>
 
 <para>If you have those sorts of requirements, it may be worth
 exploring the use of &postgres; 8.0 <acronym>PITR</acronym> (Point In
@@ -155,17 +154,18 @@
 
 <itemizedlist>
 
-<listitem><para> It is necessary to have a <envar>sl_path</envar>
-entry allowing connection from each node to every other node.  Most
-will normally not need to be used for a given replication
-configuration, but this means that there needs to be n(n-1) paths.  It
-is probable that there will be considerable repetition of entries,
-since the path to <quote>node n</quote> is likely to be the same from
-everywhere throughout the network.</para></listitem>
-
-<listitem><para> It is similarly necessary to have a
-<envar>sl_listen</envar> entry indicating how data flows from every
-node to every other node.  This again requires configuring n(n-1)
+<listitem><para> It is necessary to have a <xref linkend=
+"table.sl-path"> entry allowing connection from each node to every
+other node.  Most will normally not need to be used for a given
+replication configuration, but this means that there needs to be
+n(n-1) paths.  It is probable that there will be considerable
+repetition of entries, since the path to <quote>node n</quote> is
+likely to be the same from everywhere throughout the
+network.</para></listitem>
+
+<listitem><para> It is similarly necessary to have a <xref linkend=
+"table.sl-listen"> entry indicating how data flows from every node to
+every other node.  This again requires configuring n(n-1)
 <quote>listener paths.</quote></para></listitem>
 
 <listitem><para> Each SYNC applied needs to be reported back to all of
Index: ddlchanges.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/ddlchanges.sgml
+++ doc/adminguide/ddlchanges.sgml
@@ -8,9 +8,9 @@
 get rather deranged because they disagree on how particular tables are
 built.</para>
 
-<para>If you pass the changes through &slony1; via the <command><link
-linkend="stmtddlscript">EXECUTE SCRIPT</link></command> (slonik)
-/<function>ddlscript(set,script,node)</function> (stored function),
+<para>If you pass the changes through &slony1; via <xref
+linkend="stmtddlscript"> (slonik) /<xref
+linkend="function.ddlscript-integer-text-integer"> (stored function),
 this allows you to be certain that the changes take effect at the same
 point in the transaction streams on all of the nodes.  That may not be
 so important if you can take something of an outage to do schema
@@ -26,8 +26,7 @@
 subscriber nodes.  </para>
 
 <para>It's worth making a couple of comments on <quote>special
-things</quote> about <command><link linkend="stmtddlscript">EXECUTE
-SCRIPT</link></command>:</para>
+things</quote> about <xref linkend="stmtddlscript">:</para>
 
 <itemizedlist>
 
@@ -41,13 +40,13 @@
 
 <listitem><para>If there is <emphasis>anything</emphasis> broken about
 the script, or about how it executes on a particular node, this will
-cause the <link linkend="slon"><application>slon</application></link>
-daemon for that node to panic and crash. If you restart the node, it
-will, more likely than not, try to <emphasis>repeat</emphasis> the DDL
-script, which will, almost certainly, fail the second time just as it
-did the first time.  I have found this scenario to lead to a need to
-go to the <quote>master</quote> node to delete the event to stop it
-from continuing to fail.</para></listitem>
+cause the <xref linkend="slon"> daemon for that node to panic and
+crash. If you restart the node, it will, more likely than not, try to
+<emphasis>repeat</emphasis> the DDL script, which will, almost
+certainly, fail the second time just as it did the first time.  I have
+found this scenario to lead to a need to go to the
+<quote>master</quote> node to delete the event to stop it from
+continuing to fail.</para></listitem>
 
 <listitem><para> For <application>slon</application> to, at that
 point, <quote>panic</quote> is probably the
@@ -60,9 +59,10 @@
 risk of there being updates made that depended on the DDL changes in
 order to be correct.</para></listitem>
 
-<listitem><para> When you run <command><link linkend="stmtddlscript">EXECUTE SCRIPT</link></command>, this causes
-the <application>slonik</application> to request, <emphasis>for each
-table in the specified set</emphasis>, an exclusive table lock.</para>
+<listitem><para> When you run <xref linkend="stmtddlscript">, this
+causes the <application>slonik</application> to request, <emphasis>for
+each table in the specified set</emphasis>, an exclusive table
+lock.</para>
 
 <para> It starts by requesting the lock, and altering the table to
 remove &slony1; triggers:
Index: failover.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/failover.sgml
+++ doc/adminguide/failover.sgml
@@ -23,9 +23,8 @@
 origin transfer.</para>
 
 <para> It is assumed in this document that the reader is familiar with
-the <link linkend="slonik"> <application>slonik</application> </link>
-utility and knows at least how to set up a simple 2 node replication
-system with &slony1;.</para></sect2>
+the <xref linkend="slonik"> utility and knows at least how to set up a
+simple 2 node replication system with &slony1;.</para></sect2>
 
 <sect2><title> Controlled Switchover</title>
 
@@ -33,8 +32,8 @@
 <quote>subscriber</quote> as node2 (<emphasis>e.g.</emphasis> -
 slave).  A web application on a third server is accessing the database
 on node1.  Both databases are up and running and replication is more
-or less in sync.  We do controlled switchover using <command> <link
-     linkend="stmtmoveset"> MOVE SET </link> </command>.
+or less in sync.  We do controlled switchover using <xref
+linkend="stmtmoveset">.
 
 <itemizedlist>
 
@@ -44,8 +43,8 @@
 Users who use <application>pg_pool</application> for the applications database
 connections merely have to shut down the pool.</para></listitem>
 
-<listitem><para> A small <link linkend="slonik"> Slonik </link> script
-executes the following commands:
+<listitem><para> A small <xref linkend="slonik"> script executes the
+following commands:
 
 <programlisting>
 lock set (id = 1, origin = 1);
@@ -72,10 +71,10 @@
 </itemizedlist></para>
 
 <para> You may now simply shut down the server hosting node1 and do
-whatever is required to maintain the server.  When <application><link
-     linkend="slon">slon</link></application> node1 is restarted later,
-it will start replicating again, and soon catch up.  At this point the
-procedure to switch origins is executed again to restore the original
+whatever is required to maintain the server.  When <xref
+linkend="slon"> node1 is restarted later, it will start replicating
+again, and soon catch up.  At this point the procedure to switch
+origins is executed again to restore the original
 configuration.</para>
 
 <para> This is the preferred way to handle things; it runs quickly,
@@ -86,16 +85,16 @@
 <sect2><title> Failover</title>
 
 <para> If some more serious problem occurs on the
-<quote>origin</quote> server, it may be necessary to <command><link
-     linkend="stmtfailover">FAILOVER</link></command> to a backup
-server.  This is a highly undesirable circumstance, as transactions
-<quote>committed</quote> on the origin, but not applied to the
-subscribers, will be lost.  You may have reported these transactions
-as <quote>successful</quote> to outside users.  As a result, failover
-should be considered a <emphasis>last resort</emphasis>.  If the
-<quote>injured</quote> origin server can be brought up to the point
-where it can limp along long enough to do a controlled switchover,
-that is <emphasis>greatly</emphasis> preferable.</para>
+<quote>origin</quote> server, it may be necessary to <xref
+linkend="stmtfailover"> to a backup server.  This is a highly
+undesirable circumstance, as transactions <quote>committed</quote> on
+the origin, but not applied to the subscribers, will be lost.  You may
+have reported these transactions as <quote>successful</quote> to
+outside users.  As a result, failover should be considered a
+<emphasis>last resort</emphasis>.  If the <quote>injured</quote>
+origin server can be brought up to the point where it can limp along
+long enough to do a controlled switchover, that is
+<emphasis>greatly</emphasis> preferable.</para>
 
 <para> &slony1; does not provide any automatic detection for failed
 systems.  Abandoning committed transactions is a business decision
@@ -107,7 +106,7 @@
 <itemizedlist>
 
 <listitem>
-<para>The <link linkend="slonik"><application>slonik</application></link> command
+<para>The <xref linkend="slonik"> command
 <programlisting>
 failover (id = 1, backup node = 2);
 </programlisting>
@@ -139,8 +138,8 @@
 <listitem>
 <para> After the failover is complete and node2 accepts write
 operations against the tables, remove all remnants of node1's
-configuration information with the <command><link
-       linkend="stmtdropnode">DROP NODE</link></command> command:
+configuration information with the <xref linkend="stmtdropnode">
+command:
 
 <programlisting>
 drop node (id = 1, event node = 2);
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.13
retrieving revision 1.14
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.13 -r1.14
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -112,7 +112,9 @@
 this will find them.</para>
 
 <para> If you have broken applications that hold connections open,
-that has several unsalutory effects as <link linkend="longtxnsareevil"> described in the FAQ</link>.</para></listitem>
+that has several unsalutary effects as <link
+linkend="longtxnsareevil"> described in the
+FAQ</link>.</para></listitem>
 
 </itemizedlist></para>
 
Index: slonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/slonik.sgml -Ldoc/adminguide/slonik.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/slonik.sgml
+++ doc/adminguide/slonik.sgml
@@ -1,5 +1,5 @@
 <!-- $Id$ -->
-<refentry id="app-slonik">
+<refentry id="slonik">
 <refmeta>
     <refentrytitle id="app-slonik-title"><application>slonik</application></refentrytitle>
     <manvolnum>1</manvolnum>
@@ -7,13 +7,13 @@
   </refmeta>
 
   <refnamediv>
-    <refname><application id="slonik">slonik</application></refname>
+    <refname><application>slonik</application></refname>
     <refpurpose>
-      <productname>Slony-I</productname> command processor
+      &slony1; command processor
     </refpurpose>
   </refnamediv>
 
- <indexterm zone="app-slonik">
+ <indexterm zone="slonik">
   <primary>slonik</primary>
  </indexterm>
 
@@ -30,7 +30,7 @@
     <para>
      <application>slonik</application> is the command processor
      application that is used to set up and modify configurations of
-     <productname>Slony-I</productname> replication clusters.
+     &slony1; replication clusters.
     </para>
  </refsect1>
 
@@ -65,8 +65,7 @@
   these sorts of scripting languages already have perfectly good ways
   of managing variables, doing iteration, and such.</para>
   
-  <para>See also <link linkend="slonikref"> Slonik Command Summary
-  </link>. </para>
+  <para>See also <xref linkend="slonikref">. </para>
 
  </refsect1>
 
Index: plainpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/plainpaths.sgml,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ldoc/adminguide/plainpaths.sgml -Ldoc/adminguide/plainpaths.sgml -u -w -r1.3 -r1.4
--- doc/adminguide/plainpaths.sgml
+++ doc/adminguide/plainpaths.sgml
@@ -6,16 +6,15 @@
 
 <itemizedlist>
 
-<listitem><para> <link linkend="admconninfo"> <command> ADMIN CONNINFO
-</command> </link> - controlling how a <link linkend="slonik"> slonik
-</link> script accesses the various nodes.
+<listitem><para> <xref linkend="admconninfo"> - controlling how a
+<xref linkend="slonik"> script accesses the various nodes.
 
 <para> These connections are the ones that go from your
 <quote/administrative workstation/ to all of the nodes in a &slony1;
 cluster.
 
 <para> It is <emphasis/vital/ that you have connections from the
-central location where you run <link linkend="slonik"> slonik </link>
+central location where you run <xref linkend="slonik"> 
 to each and every node in the network.  These connections are only
 used briefly, to submit the few <acronym/SQL/ requests required to
 control the administration of the cluster.
@@ -24,16 +23,14 @@
 be quite reasonable to <quote>hack together</quote> temporary
 connections using <link linkend="tunnelling">SSH tunnelling</link>.
 
-<listitem><para> <link linkend="stmtstorepath"> <command> STORE PATH
-</command> </link> - controlling how <link linkend="slon"> slon
-</link> daemons communicate with remote nodes.  These paths are stored
-in <link linkend="table.sl-path"> <envar>sl_path</envar> </link>. 
+<listitem><para> <xref linkend="stmtstorepath"> - controlling how
+<xref linkend="slon"> daemons communicate with remote nodes.  These
+paths are stored in <xref linkend="table.sl-path">.
 
 <para> You absolutely <emphasis>need</emphasis> to have a path between
 each subscriber node and its provider; other paths are optional, and
-will not be used unless a listen path in <link
-linkend="table.sl-listen"> <envar>sl_listen</envar> </link>. is needed
-that uses that particular path.
+will not be used unless a listen path in <xref
+linkend="table.sl-listen">. is needed that uses that particular path.
 
 </itemizedlist>
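
A small hedged sketch showing both kinds of connection information for a
two-node cluster; the cluster name and conninfo strings are invented:

    cluster name = mycluster;

    # admin conninfo: how this slonik script reaches each node
    node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
    node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

    # store path: how the slon daemons reach one another
    store path (server = 1, client = 2,
                conninfo = 'dbname=mydb host=node1 user=slony');
    store path (server = 2, client = 1,
                conninfo = 'dbname=mydb host=node2 user=slony');
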
 
Index: dropthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/dropthings.sgml
+++ doc/adminguide/dropthings.sgml
@@ -6,16 +6,14 @@
 
 <sect2><title>Dropping A Whole Node</title>
 
-<para>If you wish to drop an entire node from replication, the <link
-    linkend="slonik">slonik</link> command <command><link
-     linkend="stmtdropnode">DROP NODE</link></command> should do the
+<para>If you wish to drop an entire node from replication, the <xref
+linkend="slonik"> command <xref linkend="stmtdropnode"> should do the
 trick.</para>
 
 <para>This will lead to &slony1; dropping the triggers (generally that
 deny the ability to update data), restoring the <quote>native</quote>
-triggers, dropping the schema used by &slony1;, and the <link
-    linkend="slon"> <command>slon</command> </link> process for that node
-terminating itself.</para>
+triggers, dropping the schema used by &slony1;, and the <xref
+linkend="slon"> process for that node terminating itself.</para>
 
 <para>As a result, the database should be available for whatever use
 your application makes of the database.</para>
@@ -34,39 +32,35 @@
 <sect2><title>Dropping An Entire Set</title>
 
 <para>If you wish to stop replicating a particular replication set,
-the <link linkend="slonik">slonik</link> command <command><link
-     linkend="stmtdropset">DROP SET</link></command> is what you need to
-use.</para>
-
-<para>Much as with <command><link linkend="stmtdropnode">DROP NODE
-</link></command>, this leads to &slony1; dropping the &slony1;
-triggers on the tables and restoring <quote>native</quote> triggers.
-One difference is that this takes place on <emphasis>all</emphasis>
-nodes in the cluster, rather than on just one node.  Another
-difference is that this does not clear out the &slony1; cluster's
-namespace, as there might be other sets being serviced.</para>
-
-<para>This operation is quite a bit more dangerous than <command>
-<link linkend="stmtdropnode">DROP NODE</link></command>, as there
-<emphasis>isn't</emphasis> the same sort of <quote>failsafe.</quote>
-If you tell <command><link linkend="stmtdropset">DROP
-SET</link></command> to drop the <emphasis>wrong</emphasis> set, there
-isn't anything to prevent potentially career-limiting
+the <xref linkend="slonik"> command <xref linkend="stmtdropset"> is
+what you need to use.</para>
+
+<para>Much as with <xref linkend="stmtdropnode">, this leads to
+&slony1; dropping the &slony1; triggers on the tables and restoring
+<quote>native</quote> triggers.  One difference is that this takes
+place on <emphasis>all</emphasis> nodes in the cluster, rather than on
+just one node.  Another difference is that this does not clear out the
+&slony1; cluster's namespace, as there might be other sets being
+serviced.</para>
+
+<para>This operation is quite a bit more dangerous than <xref
+linkend="stmtdropnode">, as there <emphasis>isn't</emphasis> the same
+sort of <quote>failsafe.</quote> If you tell <xref
+linkend="stmtdropset"> to drop the <emphasis>wrong</emphasis> set,
+there isn't anything to prevent potentially career-limiting
 <quote>unfortunate results.</quote> Handle with care...</para>
 </sect2>
 
 <sect2><title>Unsubscribing One Node From One Set</title>
 
-<para>The <command><link linkend="stmtunsubscribeset">UNSUBSCRIBE
-SET</link></command> operation is a little less invasive than either
-<command><link linkend="stmtdropset">DROP SET</link></command> or
-<command><link linkend="stmtdropnode">DROP NODE</link></command>; it
-involves dropping &slony1; triggers and restoring
-<quote>native</quote> triggers on one node, for one replication
-set.</para>
+<para>The <xref linkend="stmtunsubscribeset"> operation is a little
+less invasive than either <xref linkend="stmtdropset"> or <xref
+linkend="stmtdropnode">; it involves dropping &slony1; triggers and
+restoring <quote>native</quote> triggers on one node, for one
+replication set.</para>
 
-<para>Much like with <command><link linkend="stmtdropnode">DROP NODE</link></command>, 
-this operation will fail if there is a node subscribing to the set on this node.
+<para>Much like with <xref linkend="stmtdropnode">, this operation
+will fail if there is a node subscribing to the set on this node.
 
 <warning>
 <para>For all of the above operations, <quote>turning replication back
@@ -80,9 +74,8 @@
 </sect2>
 <sect2><title> Dropping A Table From A Set</title>
 
-<para>In &slony1; 1.0.5 and above, there is a Slonik command
-<command><link linkend="stmtsetdroptable">SET DROP
-TABLE</link></command> that allows dropping a single table from
+<para>In &slony1; 1.0.5 and above, there is a Slonik command <xref
+linkend="stmtsetdroptable"> that allows dropping a single table from
 replication without forcing the user to drop the entire replication
 set.</para>
 
@@ -105,20 +98,17 @@
 
 <para>You'll have to run these three queries on all of the nodes,
 preferably firstly on the origin node, so that the dropping of this
-propagates properly.  Implementing this via a <link linkend="slonik">slonik</link> statement with a new &slony1; event
-would do that.  Submitting the three queries using <command><link
-     linkend="stmtddlscript">EXECUTE SCRIPT</link></command> could do that;
-see <link linkend="ddlchanges">Database Schema Changes</link> for more
-details.  Also possible would be to connect to each database and
-submit the queries by hand.</para>
+propagates properly.  Implementing this via a <xref linkend="slonik">
+statement with a new &slony1; event would do that.  Submitting the
+three queries using <xref linkend="stmtddlscript"> could do that; see
+<xref linkend="ddlchanges"> for more details.  Also possible would be
+to connect to each database and submit the queries by hand.</para>
 </sect2>
 
 <sect2><title>Dropping A Sequence From A Set</title>
 
-<para>Just as with <command><link linkend="stmtsetdroptable">SET
-DROP TABLE</link> </command>, version 1.0.5 introduces the operation
-<command><link linkend="stmtsetdropsequence">SET DROP
-SEQUENCE</link></command>.</para>
+<para>Just as with <xref linkend="stmtsetdroptable">, version 1.0.5
+introduces the operation <xref linkend="stmtsetdropsequence">.</para>
 
 <para>If you are running an earlier version, here are instructions as
 to how to drop sequences:</para>
@@ -134,10 +124,10 @@
 </para>
 
 <para> Those two queries could be submitted to all of the nodes via
-<function>ddlscript()</function> / <command><link
-     linkend="stmtddlscript">EXECUTE SCRIPT</link></command>, thus
-eliminating the sequence everywhere <quote>at once.</quote> Or they
-may be applied by hand to each of the nodes.</para>
+<xref linkend="function.ddlscript-integer-text-integer"> / <xref
+linkend="stmtddlscript">, thus eliminating the sequence everywhere
+<quote>at once.</quote> Or they may be applied by hand to each of the
+nodes.</para>
 </sect2>
 </sect1>
 <!-- Keep this comment at the end of the file
Index: listenpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/listenpaths.sgml -Ldoc/adminguide/listenpaths.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/listenpaths.sgml
+++ doc/adminguide/listenpaths.sgml
@@ -11,10 +11,9 @@
 usage of cascaded subscribers (<emphasis>e.g.</emphasis> - subscribers
 that are subscribing through a subscriber node), you will have to be
 fairly careful about the configuration of <quote>listen paths</quote>
-via the Slonik <command> <link linkend="stmtstorelisten">STORE
-LISTEN</link></command> and <link
-linkend="stmtdroplisten"><command>DROP LISTEN</command></link>
-statements that control the contents of the table sl_listen.</para>
+via the Slonik <xref linkend="stmtstorelisten"> and <xref
+linkend="stmtdroplisten"> statements that control the contents of the
+table sl_listen.</para>
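
As a hedged illustration of what such statements look like (node numbers
invented), to have node 3 receive node 1's events through node 2 and retire
the now-unused direct path:

    store listen (origin = 1, provider = 2, receiver = 3);
    drop listen  (origin = 1, provider = 1, receiver = 3);
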
 
 <para>The <quote>listener</quote> entries in this table control where
 each node expects to listen in order to get events propagated from
@@ -33,10 +32,10 @@
 <para>On one occasion, I had a need to drop a subscriber node (#2) and
 recreate it.  That node was the data provider for another subscriber
 (#3) that was, in effect, a <quote>cascaded slave.</quote> Dropping
-the subscriber node initially didn't work, as <link linkend="slonik"><command>slonik</command></link> informed me that
-there was a dependant node.  I repointed the dependant node to the
-<quote>master</quote> node for the subscription set, which, for a
-while, replicated without difficulties.</para>
+the subscriber node initially didn't work, as <xref linkend="slonik">
+informed me that there was a dependent node.  I repointed the
+dependent node to the <quote>master</quote> node for the subscription
+set, which, for a while, replicated without difficulties.</para>
 
 <para>I then dropped the subscription on <quote>node 2</quote>, and
 started resubscribing it.  That raised the &slony1;
@@ -129,8 +128,7 @@
 <para>The tool <filename>init_cluster.pl</filename> in the
 <filename>altperl</filename> scripts produces optimized listener
 networks in both the tabular form shown above as well as in the form
-of <link linkend="slonik"> <application>slonik</application></link>
-statements.</para>
+of <xref linkend="slonik"> statements.</para>
 
 <para>There are three <quote>thorns</quote> in this set of roses:
 
@@ -197,10 +195,9 @@
 the listener paths.</para>
 
 <para> If you are running an earlier version of &slony1;, you may want
-to take a look at <link linkend="regenlisten"><application>regenerate-listens.pl</application></link>,
-a Perl script which duplicates the functionality of the stored
-procedure in the form of a script that generates the <link
-    linkend="slonik"><command>slonik</command> </link> requests to
+to take a look at <xref linkend="regenlisten">, a Perl script which
+duplicates the functionality of the stored procedure in the form of a
+script that generates the <xref linkend="slonik"> requests to
 generate the listener paths.</para></sect2>
 
 </sect1>

