CVS User Account cvsuser
Wed Feb 23 23:19:05 PST 2005
Log Message:
-----------
Changed all table references to be <XREF> references to the DB schema
tables.

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        defineset.sgml (r1.12 -> r1.13)
        dropthings.sgml (r1.12 -> r1.13)
        faq.sgml (r1.23 -> r1.24)
        intro.sgml (r1.11 -> r1.12)
        listenpaths.sgml (r1.14 -> r1.15)
        maintenance.sgml (r1.13 -> r1.14)
        monitoring.sgml (r1.15 -> r1.16)
        slonik_ref.sgml (r1.14 -> r1.15)
        usingslonik.sgml (r1.5 -> r1.6)

-------------- next part --------------
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -123,8 +123,8 @@
 
 <para> Each time a SYNC is processed, values are recorded for
 <emphasis>all</emphasis> of the sequences in the set.  If there are a
-lot of sequences, this can cause <envar>sl_seqlog</envar> to grow
-rather large.</para>
+lot of sequences, this can cause <xref linkend="table.sl-seqlog"> to
+grow rather large.</para>
 
 <para> This points to an important difference between tables and
 sequences: if you add additional tables that do not see much/any
@@ -139,18 +139,18 @@
 introduce much work to the system.</para>
 
 <para> If it is not updated, the trigger on the table on the origin
-never fires, and no entries are added to <envar>sl_log_1</envar>.  The
-table never appears in any of the further replication queries
-(<emphasis>e.g.</emphasis> in the <command>FETCH 100 FROM
-LOG</command> queries used to find replicatable data) as they only
-look for tables for which there are entries in
-<envar>sl_log_1</envar>.</para></listitem>
+never fires, and no entries are added to <xref
+linkend="table.sl-log-1">.  The table never appears in any of the
+further replication queries (<emphasis>e.g.</emphasis> in the
+<command>FETCH 100 FROM LOG</command> queries used to find
+replicatable data) as they only look for tables for which there are
+entries in <xref linkend="table.sl-log-1">.</para></listitem>
 
 <listitem><para> In contrast, a fixed amount of work is introduced to
 each SYNC by each sequence that is replicated.</para>
 
-<para> Replicate 300 sequence and 300 rows need to be added to
-<envar>sl_seqlog</envar> on a regular basis.</para>
+<para> Replicating 300 sequences means that 300 rows need to be added
+to <xref linkend="table.sl-seqlog"> on a regular basis.</para>
 
 <para> It is more than likely that if the value of a particular
 sequence hasn't changed since it was last checked, perhaps the same
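
For a rough sense of how large sl_seqlog has grown on a node, a quick check
along these lines can help (a sketch only; "_mycluster" stands in for
whatever namespace your cluster actually uses):

  -- row count of the sequence log; may be slow if the table is already huge
  select count(*) from _mycluster.sl_seqlog;
  -- on-disk size in 8KB pages, as last recorded by VACUUM/ANALYZE
  select relname, relpages from pg_class where relname = 'sl_seqlog';
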
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.14
retrieving revision 1.15
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.14 -r1.15
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -299,8 +299,10 @@
        <listitem><para>The unique, numeric ID number of the node.</para></listitem>
       </varlistentry>
       
-      <varlistentry><term><literal>COMMENT = 'comment text'</literal></term>
-       <listitem><para> A descriptive text added to the node entry in the table sl_node.</para></listitem>
+      <varlistentry><term><literal>COMMENT = 'comment
+      text'</literal></term> <listitem><para> A descriptive text added
+      to the node entry in the table <xref linkend="table.sl-node">.
+      </para></listitem>
       </varlistentry>
      </variablelist>
      
@@ -365,7 +367,7 @@
       </varlistentry>
       
       <varlistentry><term><literal> COMMENT = 'description' </literal></term>
-       <listitem><para> A descriptive text added to the node entry in the table sl_node.</para></listitem>
+       <listitem><para> A descriptive text added to the node entry in the table <xref linkend="table.sl-node">.</para></listitem>
       </varlistentry>
       
       <varlistentry><term><literal> SPOOLNODE = boolean </literal></term>
@@ -1910,7 +1912,7 @@
        
       </varlistentry>
       <varlistentry><term><literal> WAIT ON = ival </literal></term>
-       <listitem><para> The ID of the node where the sl_confirm table
+       <listitem><para> The ID of the node where the <xref linkend="table.sl-confirm"> table
 	 is to be checked.  The default value is 1.</para></listitem>
        
       </varlistentry>
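
The comment text given here ends up in sl_node alongside the node ID.  To
review the comments later, something like the following should work (a
sketch; the "_mycluster" namespace and the "no_comment" column name are
assumptions based on the usual Slony-I naming, since only no_id appears
verbatim in this patch):

  select no_id, no_comment from _mycluster.sl_node order by no_id;
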
Index: intro.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v
retrieving revision 1.11
retrieving revision 1.12
diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.11 -r1.12
--- doc/adminguide/intro.sgml
+++ doc/adminguide/intro.sgml
@@ -154,24 +154,15 @@
 
 <itemizedlist>
 
-<listitem><para> It is necessary to have a <xref linkend=
-"table.sl-path"> entry allowing connection from each node to every
-other node.  Most will normally not need to be used for a given
-replication configuration, but this means that there needs to be
-n(n-1) paths.  It is probable that there will be considerable
-repetition of entries, since the path to <quote>node n</quote> is
-likely to be the same from everywhere throughout the
-network.</para></listitem>
-
-<listitem><para> It is similarly necessary to have a <xref linkend=
-"table.sl-listen"> entry indicating how data flows from every node to
-every other node.  This again requires configuring n(n-1)
-<quote>listener paths.</quote></para></listitem>
+<listitem><para> It is necessary to have <xref linkend=
+"table.sl-listen"> entries allowing connection from each node to every
+other node.  Most will normally not need to be used very heavily, but
+it still means that there need to be n(n-1) paths.  </para></listitem>
 
 <listitem><para> Each SYNC applied needs to be reported back to all of
 the other nodes participating in the set so that the nodes all know
-that it is safe to purge <envar>sl_log_1</envar> and
-<envar>sl_log_2</envar> data, as any <quote>forwarding</quote> node
+that it is safe to purge <xref linkend="table.sl-log-1"> and <xref
+linkend="table.sl-log-2"> data, as any <quote>forwarding</quote> node
 could potentially take over as <quote>master</quote> at any time.  One
 might expect SYNC messages to need to travel through n/2 nodes to get
 propagated to their destinations; this means that each SYNC is
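
As a quick sanity check on the n(n-1) figure mentioned above, counting the
path entries on any node should give exactly that number once all paths are
stored; for a five-node cluster that is 5 * 4 = 20 rows (a sketch;
"_mycluster" is a placeholder namespace):

  select count(*) from _mycluster.sl_path;  -- expect n*(n-1), e.g. 20 for 5 nodes
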
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.13
retrieving revision 1.14
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.13 -r1.14
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -8,8 +8,9 @@
 
 <listitem><para> Deletes old data from various tables in the
 <productname>Slony-I</productname> cluster's namespace, notably
-entries in <envar>sl_log_1</envar>, <envar>sl_log_2</envar> (not yet
-used), and <envar>sl_seqlog</envar>.</para></listitem>
+entries in <xref linkend="table.sl-log-1">, <xref
+linkend="table.sl-log-2"> (not yet used), and <xref
+linkend="table.sl-seqlog">.</para></listitem>
 
 <listitem><para> Vacuum certain tables used by &slony1;.  As of 1.0.5,
 this includes pg_listener; in earlier versions, you must vacuum that
@@ -83,8 +84,8 @@
 
 <listitem><para> <command>test_slony_replication</command> is a
 Perl script to which you can pass connection information to get to a
-&slony1; node.  It then queries <envar>sl_path</envar> and other
-information on that node in order to determine the shape of the
+&slony1; node.  It then queries <xref linkend="table.sl-path"> and
+other information on that node in order to determine the shape of the
 requested replication set.</para>
 
 <para> It then injects some test queries to a test table called
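
On versions where the cleanup thread does not yet vacuum pg_listener for
you, the manual equivalent is ordinary VACUUM runs against the
heavily-churned tables (a sketch; exactly which tables need it most, and
the "_mycluster" namespace, will vary by installation):

  vacuum analyze pg_listener;
  vacuum analyze _mycluster.sl_log_1;
  vacuum analyze _mycluster.sl_seqlog;
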
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.15
retrieving revision 1.16
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.15 -r1.16
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -73,16 +73,15 @@
 <option>cluster</option>, <option>password</option>, and
 <option>port</option> to connect to any of the nodes on a cluster.</para>
 
-<para> The script then rummages through <envar>sl_path</envar> to find
-all of the nodes in the cluster, and the DSNs to allow it to, in turn,
-connect to each of them.</para>
+<para> The script then rummages through <xref linkend="table.sl-path">
+to find all of the nodes in the cluster, and the DSNs to allow it to,
+in turn, connect to each of them.</para>
 
 <para> For each node, the script examines the state of things,
 including such things as:
 
 <itemizedlist>
-<listitem><para> Checking  <link
-linkend="table.sl-listen"> <envar>sl_listen</envar></link> for some
+<listitem><para> Checking <xref linkend="table.sl-listen"> for some
 <quote>analytically determinable</quote> problems.  It lists paths
 that are not covered.</para></listitem>  
 
@@ -91,14 +90,13 @@
 <para> If a node hasn't submitted any events in a while, that likely
 suggests a problem.</para></listitem>
 
-<listitem><para> Summarizes the <quote>aging</quote> of table <link
-linkend="table.sl-confirm"> <envar>sl_confirm</envar></link> </para>
+<listitem><para> Summarizes the <quote>aging</quote> of table <xref
+linkend="table.sl-confirm"> </para>
 
 <para> If one or another of the nodes in the cluster hasn't reported
-back recently, that tends to lead to cleanups of tables like <link
-linkend="table.sl-log-1"> <envar>sl_log_1</envar></link> and <link
-linkend="table.sl-seqlog"> <envar>sl_seqlog</envar></link> not taking
-place.</para></listitem>
+back recently, that tends to lead to cleanups of tables like <xref
+linkend="table.sl-log-1"> and <xref linkend="table.sl-seqlog"> not
+taking place.</para></listitem>
 
 <listitem><para> Summarizes what transactions have been running for a
 long time</para>
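
The "aging" check that the script performs on sl_confirm can also be
approximated by hand; a query along these lines shows how stale each node's
confirmations are (a sketch; "con_timestamp" is assumed to be the
confirmation-time column and "_mycluster" is a placeholder, since only
con_origin and con_received appear verbatim in this patch):

  select con_received, max(con_timestamp) as latest_confirmation
    from _mycluster.sl_confirm
   group by con_received
   order by latest_confirmation;
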
Index: dropthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v
retrieving revision 1.12
retrieving revision 1.13
diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.12 -r1.13
--- doc/adminguide/dropthings.sgml
+++ doc/adminguide/dropthings.sgml
@@ -25,9 +25,10 @@
 the node that you attempt to drop, so there is a bit of a failsafe to
 protect you from errors.</para>
 
-<para><link linkend="faq17">sl_log_1 isn't getting purged</link>
-documents some extra maintenance that may need to be done on
-sl_confirm if you are running versions prior to 1.0.5.</para></sect2>
+<para><link linkend="faq17"> <envar>sl_log_1</envar> isn't getting
+purged</link> documents some extra maintenance that may need to be
+done on <xref linkend="table.sl-confirm"> if you are running versions
+prior to 1.0.5.</para></sect2>
 
 <sect2><title>Dropping An Entire Set</title>
 
@@ -83,8 +84,9 @@
 to do this:</para>
 
 <para>You can fiddle this by hand by finding the table ID for the
-table you want to get rid of, which you can find in sl_table, and then
-run the following three queries, on each host:
+table you want to get rid of, which you can find in <xref
+linkend="table.sl-table">, and then run the following three queries,
+on each host:
 <programlisting>
   select _slonyschema.alterTableRestore(40);
   select _slonyschema.tableDropKey(40);
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.23
retrieving revision 1.24
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.23 -r1.24
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -274,7 +274,7 @@
 <emphasis>essential functionality of <xref linkend="stmtsetdroptable">
 involves the functionality in <function>droptable_int()</function>.
 You can fiddle this by hand by finding the table ID for the table you
-want to get rid of, which you can find in sl_table, and then run the
+want to get rid of, which you can find in <xref linkend="table.sl-table">, and then run the
 following three queries, on each host:</emphasis>
 
 <programlisting>
@@ -403,8 +403,8 @@
 database, including the &slony1; ones, and</para></listitem>
 
 <listitem><para> A &slony1; sync event, which wants to grab a
-<command>AccessExclusiveLock</command> on the table
-<envar>sl_event</envar>.</para></listitem> </itemizedlist></para>
+<command>AccessExclusiveLock</command> on the table <xref
+linkend="table.sl-event">.</para></listitem> </itemizedlist></para>
 
 <para>The initial query that will be blocked is thus:
 
@@ -429,10 +429,11 @@
 
 <para>The <command>LOCK</command> statement will sit there and wait
 until <command>pg_dump</command> (or whatever else has pretty much any
-kind of access lock on <envar>sl_event</envar>) completes.</para>
+kind of access lock on <xref linkend="table.sl-event">)
+completes.</para>
 
 <para>Every subsequent query submitted that touches
-<envar>sl_event</envar> will block behind the
+<xref linkend="table.sl-event"> will block behind the
 <function>createEvent</function> call.</para>
 
 <para>There are a number of possible answers to this:
@@ -455,10 +456,11 @@
 commission [for some reason], and it's taking a long time to get a
 sync through.</para></question>
 
-<answer><para> You might want to take a look at the sl_log_1/sl_log_2
-tables, and do a summary to see if there are any really enormous
-&slony1; transactions in there.  Up until at least 1.0.2, there needs
-to be a slon connected to the origin in order for
+<answer><para> You might want to take a look at the <xref
+linkend="table.sl-log-1">/<xref linkend="table.sl-log-2"> tables, and
+do a summary to see if there are any really enormous &slony1;
+transactions in there.  Up until at least 1.0.2, there needs to be a
+<xref linkend="slon"> connected to the origin in order for
 <command>SYNC</command> events to be generated.</para>
 
 <para>If none are being generated, then all of the updates until the
@@ -496,7 +498,7 @@
 <para>Unfortunately, replication suddenly stopped to node 3.</para>
 
 <para>The problem was that there was not a suitable set of
-<quote>listener paths</quote> in sl_listen to allow the events from
+<quote>listener paths</quote> in <xref linkend="table.sl-listen"> to allow the events from
 node 1 to propagate to node 3.  The events were going through node 2,
 and blocking behind the <xref linkend="stmtsubscribeset"> event that
 node 2 was working on.</para>
@@ -548,23 +550,24 @@
 </qandaentry>
 
 <qandaentry id="faq17">
-<question><para>After dropping a node, sl_log_1 isn't getting purged
-out anymore.</para></question>
+<question><para>After dropping a node, <xref linkend="table.sl-log-1">
+isn't getting purged out anymore.</para></question>
 
 <answer><para> This is a common scenario in versions before 1.0.5, as
 the <quote>clean up</quote> that takes place when purging the node
-does not include purging out old entries from the
-&slony1; table, sl_confirm, for the recently
-departed node.</para>
+does not include purging out old entries from the &slony1; table,
+<xref linkend="table.sl-confirm">, for the recently departed
+node.</para>
 
 <para> The node is no longer around to update confirmations of what
 syncs have been applied on it, and therefore the cleanup thread that
 purges log entries thinks that it can't safely delete entries newer
-than the final sl_confirm entry, which rather curtails the ability to
-purge out old logs.</para>
+than the final <xref linkend="table.sl-confirm"> entry, which rather
+curtails the ability to purge out old logs.</para>
 
 <para>Diagnosis: Run the following query to see if there are any
-<quote>phantom/obsolete/blocking</quote> sl_confirm entries:
+<quote>phantom/obsolete/blocking</quote> <xref
+linkend="table.sl-confirm"> entries:
 
 <screen>
 oxrsbar=# select * from _oxrsbar.sl_confirm where con_origin not in (select no_id from _oxrsbar.sl_node) or con_received not in (select no_id from _oxrsbar.sl_node);
@@ -580,9 +583,9 @@
 </screen></para>
 
 <para>In version 1.0.5, the <xref linkend="stmtdropnode"> function
-purges out entries in sl_confirm for the departing node.  In earlier
-versions, this needs to be done manually.  Supposing the node number
-is 3, then the query would be:
+purges out entries in <xref linkend="table.sl-confirm"> for the
+departing node.  In earlier versions, this needs to be done manually.
+Supposing the node number is 3, then the query would be:
 
 <screen>
 delete from _namespace.sl_confirm where con_origin = 3 or con_received = 3;
@@ -596,7 +599,7 @@
 
 <para>General <quote>due diligence</quote> dictates starting with a
 <command>BEGIN</command>, looking at the contents of
-<envar>sl_confirm</envar> before, ensuring that only the expected
+<xref linkend="table.sl-confirm"> before, ensuring that only the expected
 records are purged, and then, only after that, confirming the change
 with a <command>COMMIT</command>.  If you delete confirm entries for
 the wrong node, that could ruin your whole day.</para>
@@ -604,15 +607,16 @@
 <para>You'll need to run this on each node that remains...</para>
 
 <para>Note that as of 1.0.5, this is no longer an issue at all, as it
-purges unneeded entries from sl_confirm in two places:
+purges unneeded entries from <xref linkend="table.sl-confirm"> in two
+places:
 
 <itemizedlist>
 <listitem><para> At the time a node is dropped</para></listitem>
 
-<listitem><para> At the start of each <function>cleanupEvent</function> run,
-which is the event in which old data is purged from sl_log_1 and
-sl_seqlog</para></listitem> 
-</itemizedlist></para>
+<listitem><para> At the start of each
+<function>cleanupEvent</function> run, which is the event in which old
+data is purged from <xref linkend="table.sl-log-1"> and <xref
+linkend="table.sl-seqlog"></para></listitem> </itemizedlist></para>
 </answer>
 </qandaentry>
 
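
Pulling the faq17 advice together, the careful version of the cleanup for a
departed node 3 runs the inspection and the delete inside one transaction,
repeated on each remaining node (a sketch; "_namespace" follows the
placeholder already used in the answer above):

  begin;
  -- look before you leap: these should only be rows for the departed node
  select * from _namespace.sl_confirm
   where con_origin = 3 or con_received = 3;
  -- if, and only if, the rows above are the expected ones:
  delete from _namespace.sl_confirm
   where con_origin = 3 or con_received = 3;
  commit;
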
@@ -657,21 +661,22 @@
 yet been arrived at.
 
 <para>By the time we notice that there is a problem, the seemingly
-missed delete transaction has been cleaned out of
-<envar>sl_log_1</envar>, so there appears to be no recovery possible.
-What has seemed necessary, at this point, is to drop the replication
-set (or even the node), and restart replication from scratch on that
-node.</para>
+missed delete transaction has been cleaned out of <xref
+linkend="table.sl-log-1">, so there appears to be no recovery
+possible.  What has seemed necessary, at this point, is to drop the
+replication set (or even the node), and restart replication from
+scratch on that node.</para>
 
-<para>In &slony1; 1.0.5, the handling of
-purges of sl_log_1 became more conservative, refusing to purge
+<para>In &slony1; 1.0.5, the handling of purges of <xref
+linkend="table.sl-log-1"> became more conservative, refusing to purge
 entries that haven't been successfully synced for at least 10 minutes
 on all nodes.  It was not certain that that will prevent the
-<quote>glitch</quote> from taking place, but it seems likely that it will
-leave enough sl_log_1 data to be able to do something about recovering
-from the condition or at least diagnosing it more exactly.  And
-perhaps the problem is that sl_log_1 was being purged too
-aggressively, and this will resolve the issue completely.</para>
+<quote>glitch</quote> from taking place, but it seemed likely that it
+might leave enough <xref linkend="table.sl-log-1"> data to be able to
+do something about recovering from the condition or at least
+diagnosing it more exactly.  And perhaps the problem is that <xref
+linkend="table.sl-log-1"> was being purged too aggressively, and this
+will resolve the issue completely.</para>
 </answer>
 
 <answer><para> Unfortunately, this problem has been observed in 1.0.5,
@@ -681,17 +686,17 @@
 for this; if you discover that this problem recurs, it may be an idea
 to break replication down into multiple sets in order to diminish the
 work involved in restarting replication.  If only one set has broken,
-you only unsubscribe/drop and resubscribe the one set.
+you may only need to unsubscribe/drop and resubscribe the one set.
 </para>
 
 <para> In one case we found two lines in the SQL error message in the
 log file that contained <emphasis> identical </emphasis> insertions
-into <envar> sl_log_1 </envar>.  This <emphasis> ought </emphasis> to
-be impossible as is a primary key on <envar>sl_log_1</envar>.  The
-latest punctured theory that comes from <emphasis>that</emphasis> was
-that perhaps this PK index has been corrupted (representing a
-<productname>PostgreSQL</productname> bug), and that perhaps the
-problem might be alleviated by running the query:
+into <xref linkend="table.sl-log-1">.  This <emphasis> ought
+</emphasis> to be impossible, as there is a primary key on <xref
+linkend="table.sl-log-1">.  The latest punctured theory that comes
+from <emphasis>that</emphasis> was that perhaps this PK index has been
+corrupted (representing a <productname>PostgreSQL</productname> bug),
+and that perhaps the problem might be alleviated by running the query:
 <programlisting>
 # reindex table _slonyschema.sl_log_1;
 </programlisting>
@@ -781,7 +786,7 @@
 table.</para>
 
 <para> That trigger initiates the action of logging all updates to the
-table to &slony1; <envar>sl_log</envar>
+table to &slony1; <xref linkend="table.sl-log-1">
 tables.</para></listitem>
 
 <listitem><para> On a subscriber node, this involves disabling
@@ -901,8 +906,8 @@
 
 <question><para> Replication has been slowing down, I'm seeing
 <command> FETCH 100 FROM LOG </command> queries running for a long
-time, <envar> sl_log_1 </envar> is growing, and performance is, well,
-generally getting steadily worse. </para>
+time, <xref linkend="table.sl-log-1"> is growing, and performance is,
+well, generally getting steadily worse. </para>
 </question>
 
 <answer> <para> There are actually a number of possible causes for
@@ -925,9 +930,10 @@
 idle transaction. </para> </listitem>
 
 <listitem><para> The cleanup thread will be unable to clean out
-entries in <envar> sl_log_1 </envar> and <envar> sl_seqlog </envar>,
-with the result that these tables will grow, ceaselessly, until the
-transaction is closed. </para> </listitem>
+entries in <xref linkend="table.sl-log-1"> and <xref
+linkend="table.sl-seqlog">, with the result that these tables will
+grow, ceaselessly, until the transaction is closed. </para>
+</listitem>
 </itemizedlist>
 </answer>
 
@@ -984,7 +990,7 @@
 the slonik <xref linkend="stmtddlscript"> command.
 
 <para>The solution is to rebuild the trigger on the affected table and
-fix the entries in <envar>sl_log_1 </envar> by hand.
+fix the entries in <xref linkend="table.sl-log-1"> by hand.
 
 <itemizedlist>
 
@@ -1003,10 +1009,11 @@
 COMMIT;
 </screen>
 
-<para>You then need to find the rows in <envar> sl_log_1 </envar> that
-have bad entries and fix them.  You may want to take down the slon
-daemons for all nodes except the master; that way, if you make a
-mistake, it won't immediately propagate through to the subscribers.
+<para>You then need to find the rows in <xref
+linkend="table.sl-log-1"> that have bad entries and fix them.  You may
+want to take down the slon daemons for all nodes except the master;
+that way, if you make a mistake, it won't immediately propagate
+through to the subscribers.
 
 <para> Here is an example:
 
Index: listenpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.sgml,v
retrieving revision 1.14
retrieving revision 1.15
diff -Ldoc/adminguide/listenpaths.sgml -Ldoc/adminguide/listenpaths.sgml -u -w -r1.14 -r1.15
--- doc/adminguide/listenpaths.sgml
+++ doc/adminguide/listenpaths.sgml
@@ -13,7 +13,7 @@
 fairly careful about the configuration of <quote>listen paths</quote>
 via the Slonik <xref linkend="stmtstorelisten"> and <xref
 linkend="stmtdroplisten"> statements that control the contents of the
-table sl_listen.</para>
+table <xref linkend="table.sl-listen">.</para>
 
 <para>The <quote>listener</quote> entries in this table control where
 each node expects to listen in order to get events propagated from
@@ -22,10 +22,11 @@
 reality, they need to be able to receive messages from
 <emphasis>all</emphasis> nodes in order to be able to conclude that
 <command>sync</command>s have been received everywhere, and that,
-therefore, entries in sl_log_1 and sl_log_2 have been applied
-everywhere, and can therefore be purged.  this extra communication is
-needful so <productname>Slony-I</productname> is able to shift origins
-to other locations.</para>
+therefore, entries in <xref linkend="table.sl-log-1"> and <xref
+linkend="table.sl-log-2"> have been applied everywhere, and can
+therefore be purged.  This extra communication is needed so that
+<productname>Slony-I</productname> is able to shift origins to other
+locations.</para>
 
 <sect2><title>how listening can break</title>
 
@@ -135,13 +136,14 @@
 <itemizedlist>
 
 <listitem><para> If you change the shape of the node set, so that the
-nodes subscribe differently to things, you need to drop sl_listen
-entries and create new ones to indicate the new preferred paths
-between nodes.  Until &slony1;, there is no automated way at this
-point to do this <quote>reshaping</quote>.</para></listitem>
+nodes subscribe differently to things, you need to drop <xref
+linkend="table.sl-listen"> entries and create new ones to indicate the
+new preferred paths between nodes.  Until &slony1;, there is no
+automated way at this point to do this
+<quote>reshaping</quote>.</para></listitem>
 
 <listitem><para> If you <emphasis>don't</emphasis> change the
-sl_listen entries, events will likely continue to propagate so long as
+<xref linkend="table.sl-listen"> entries, events will likely continue to propagate so long as
 all of the nodes continue to run well.  the problem will only be
 noticed when a node is taken down, <quote>orphaning</quote> any nodes
 that are listening through it.</para></listitem>
@@ -151,15 +153,17 @@
 subscribers.  there won't be a single <quote>best</quote> listener
 configuration in that case.</para></listitem>
 
-<listitem><para> In order for there to be an sl_listen path, there
-<emphasis>must</emphasis> be a series of sl_path entries connecting
-the origin to the receiver.  this means that if the contents of
-sl_path do not express a <quote>connected</quote> network of nodes,
-then some nodes will not be reachable.  this would typically happen,
-in practice, when you have two sets of nodes, one in one subnet, and
-another in another subnet, where there are only a couple of
-<quote>firewall</quote> nodes that can talk between the subnets.  cut
-out those nodes and the subnets stop communicating.</para></listitem>
+<listitem><para> In order for there to be an <xref
+linkend="table.sl-listen"> path, there <emphasis>must</emphasis> be a
+series of <xref linkend="table.sl-path"> entries connecting the origin
+to the receiver.  This means that if the contents of <xref
+linkend="table.sl-path"> do not express a <quote>connected</quote>
+network of nodes, then some nodes will not be reachable.  This would
+typically happen, in practice, when you have two sets of nodes, one in
+one subnet, and another in another subnet, where there are only a
+couple of <quote>firewall</quote> nodes that can talk between the
+subnets.  Cut out those nodes and the subnets stop
+communicating.</para></listitem>
 
 </itemizedlist></para>
 
@@ -173,15 +177,16 @@
 
 <itemizedlist>
 
-<listitem><para> sl_subscribe entries are the first, most vital
-control as to what listens to what; we <emphasis>know</emphasis> there
-must be a direct path between each subscriber node and its
-provider.</para></listitem>
-
-<listitem><para> sl_path entries are the second indicator; if
-sl_subscribe has not already indicated <quote>how to listen,</quote>
-then a node may listen directly to the event's origin if there is a
-suitable sl_path entry.</para></listitem>
+<listitem><para> <xref linkend="table.sl-subscribe"> entries are the
+first, most vital control as to what listens to what; we
+<emphasis>know</emphasis> there must be a direct path between each
+subscriber node and its provider.</para></listitem>
+
+<listitem><para> <xref linkend="table.sl-path"> entries are the second
+indicator; if <xref linkend="table.sl-subscribe"> has not already
+indicated <quote>how to listen,</quote> then a node may listen
+directly to the event's origin if there is a suitable <xref
+linkend="table.sl-path"> entry.</para></listitem>
 
 <listitem><para> Lastly, if there has been no guidance thus far based
 on the above data sources, then nodes can listen indirectly via every
@@ -190,7 +195,8 @@
 
 </itemizedlist></para>
 
-<para> Any time sl_subscribe or sl_path are modified,
+<para> Any time <xref linkend="table.sl-subscribe"> or <xref
+linkend="table.sl-path"> are modified,
 <function>RebuildListenEntries()</function> will be called to revise
 the listener paths.</para>
 
Index: usingslonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/usingslonik.sgml,v
retrieving revision 1.5
retrieving revision 1.6
diff -Ldoc/adminguide/usingslonik.sgml -Ldoc/adminguide/usingslonik.sgml -u -w -r1.5 -r1.6
--- doc/adminguide/usingslonik.sgml
+++ doc/adminguide/usingslonik.sgml
@@ -369,10 +369,10 @@
 
 <para> When debugging problems in <quote>troubled</quote> &slony1;
 clusters, it has also occasionally proven useful to use the stored
-functions.  This has been particularly useful for cases where
-<envar>sl_listen</envar> configuration has been broken, and events
-have not been propagating properly.  The <quote>easiest</quote> fix
-was to:</para>
+functions.  This has been particularly useful for cases where <xref
+linkend="table.sl-listen"> configuration has been broken, and
+events have not been propagating properly.  The <quote>easiest</quote>
+fix was to:</para>
 
 <para> <command> select
 _slonycluster.droplisten(li_origin,li_provider,li_receiver) from

