CVS User Account cvsuser
Thu Apr 14 22:22:25 PDT 2005
Log Message:
-----------
Normalize various SGML documentation files in order to cut down on
error messages

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        adminscripts.sgml (r1.22 -> r1.23)
        defineset.sgml (r1.14 -> r1.15)
        help.sgml (r1.14 -> r1.15)
        intro.sgml (r1.13 -> r1.14)
        logshipping.sgml (r1.7 -> r1.8)
        plainpaths.sgml (r1.7 -> r1.8)
        slonik_ref.sgml (r1.20 -> r1.21)
        usingslonik.sgml (r1.8 -> r1.9)

Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.22
retrieving revision 1.23
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.22 -r1.23
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -17,8 +17,7 @@
 couple of occasions, to pretty calamitous actions, so the behavior has
 been changed so that the scripts simply submit output to standard
 output.  An administrator should review the script
-<emphasis>before</emphasis> submitting it to <xref
-linkend="slonik">.</para>
+<emphasis>before</emphasis> submitting it to <xref linkend="slonik">.</para>
 
 <sect2><title>Node/Cluster Configuration - cluster.nodes</title>
 
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.20
retrieving revision 1.21
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.20 -r1.21
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -1,6 +1,5 @@
 <article id="slonikref">
 <title>Slonik Command Summary</title>
-
    <sect1><title>Introduction</title>
     
     <para>
@@ -84,7 +83,7 @@
 
      <para> Those commands are grouped together into one transaction
       per participating node. </para>
-<!-- ************************************************************ -->
+<!-- ************************************************************ --></sect3></sect2></sect1></article>
 
  <reference id="metacmds">
   <title>Slonik Meta Commands</title>
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.14
retrieving revision 1.15
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.14 -r1.15
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -40,11 +40,6 @@
 KirovOpenSourceCommunity: Slony</ulink> may be the place to
 go.</para></listitem> 
 
-<listitem><para> A <ulink
-url="http://www.kuzin.net/work/sloniki-privet.html"> Russian Setup /
-Example / HOWTO </ulink> is available for Russian readers. </para>
-</listitem>
-
 <listitem><para> <ulink url="http://pgpool.projects.postgresql.org/"
 id="pgpool"> <application> pgpool </application> </ulink> </para>
 
@@ -69,8 +64,8 @@
 config files in an XML-based format that the tool transforms into a
 Slonik script</para></listitem>
 
-<listitem><Para><ulink url="http://freshmeat.net/projects/slonyi/">
-Freshmeat on &slony1; </ulink>
+<listitem><para><ulink url="http://freshmeat.net/projects/slonyi/">
+Freshmeat on &slony1; </ulink></para></listitem>
 
 </itemizedlist>
 </sect2>
Index: logshipping.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/logshipping.sgml,v
retrieving revision 1.7
retrieving revision 1.8
diff -Ldoc/adminguide/logshipping.sgml -Ldoc/adminguide/logshipping.sgml -u -w -r1.7 -r1.8
--- doc/adminguide/logshipping.sgml
+++ doc/adminguide/logshipping.sgml
@@ -2,15 +2,15 @@
 <sect1 id="logshipping">
 <title>Log Shipping - &slony1; with Files</title>
 
-<para> One of the new features for &slony1; 1.1 is the ability to
-serialize the updates to go out into log files that can be kept in a
-spool directory.
+<para> One of the new features for 1.1 is the ability to serialize the
+updates to go out into log files that can be kept in a spool
+directory.</para>
 
 <para> The spool files could then be transferred via whatever means
 was desired to a <quote>slave system,</quote> whether that be via FTP,
 rsync, or perhaps even by pushing them onto a 1GB <quote>USB
 key</quote> to be sent to the destination by clipping it to the ankle
-of some sort of <quote>avian transport</quote> system.
+of some sort of <quote>avian transport</quote> system.</para>
 
 <para> There are plenty of neat things you can do with a data stream
 in this form, including:
@@ -18,48 +18,48 @@
 <itemizedlist>
   
   <listitem><para> Replicating to nodes that
-  <emphasis>aren't</emphasis> securable
+  <emphasis>aren't</emphasis> securable</para></listitem>
 
   <listitem><para> Replicating to destinations where it is not
-  possible to set up bidirection communications
+  possible to set up bidirection communications</para></listitem>
 
-  <listitem><para> Supporting a different form of <acronym/PITR/
+  <listitem><para> Supporting a different form of <acronym>PITR</acronym>
   (Point In Time Recovery) that filters out read-only transactions and
-  updates to tables that are not of interest.
+  updates to tables that are not of interest.</para></listitem>
 
   <listitem><para> If some disaster strikes, you can look at the logs
-  of queries in detail
+  of queries in detail</para>
 
   <para> This makes log shipping potentially useful even though you
-  might not intend to actually create a log-shipped node.
+  might not intend to actually create a log-shipped node.</para></listitem>
 
   <listitem><para> This is a really slick scheme for building load for
-  doing tests
+  doing tests</para></listitem>
 
   <listitem><para> We have a data <quote>escrow</quote> system that
-  would become incredibly cheaper given log shipping
+  would become incredibly cheaper given log shipping</para></listitem>
 
   <listitem><para> You may apply triggers on the <quote>disconnected
-  node </quote> to do additional processing on the data
+  node </quote> to do additional processing on the data</para>
 
   <para> For instance, you might take a fairly <quote>stateful</quote>
   database and turn it into a <quote>temporal</quote> one by use of
   triggers that implement the techniques described in
   <citation>Developing Time-Oriented Database Applications in SQL
   </citation> by <ulink url= "http://www.cs.arizona.edu/people/rts/">
-  Richard T. Snodgrass</ulink>.
+  Richard T. Snodgrass</ulink>.</para></listitem>
 
-</itemizedlist>
+</itemizedlist></para>
 
 <qandaset>
 <qandaentry>
 
 <question> <para> Where are the <quote>spool files</quote> for a
-subscription set generated?
+subscription set generated?</para>
 </question>
 
 <answer> <para> Any <link linkend="slon"> slon </link> node can
-generate them by adding the <option>-a</option> option.
+generate them by adding the <option>-a</option> option.</para>
 </answer>
 </qandaentry>
 <qandaentry>
@@ -126,9 +126,9 @@
 to rerun <application>slony1_dump.sh</application>:
 
 <itemizedlist>
-<listitem><para><command> SUBSCRIBE_SET </command> 
+<listitem><para><command> SUBSCRIBE_SET </command></para></listitem> 
 
-</itemizedlist>
+</itemizedlist></para>
 
 <para> A number of event types <emphasis> are </emphasis> handled in
 such a way that log shipping copes with them:
@@ -136,23 +136,23 @@
 <itemizedlist>
 
 <listitem><para><command>SYNC </command> events are, of course,
-handled.
+handled.</para></listitem>
 
-<listitem><para><command>DDL_SCRIPT</command> is handled.
+<listitem><para><command>DDL_SCRIPT</command> is handled.</para></listitem>
 
-<listitem><para><command> UNSUBSCRIBE_SET </command> 
+<listitem><para><command> UNSUBSCRIBE_SET </command></para> 
 
 <para> This event, much like <command>SUBSCRIBE_SET</command> is not
 handled by the log shipping code.  But its effect is, namely that
 <command>SYNC</command> events on the subscriber node will no longer
-contain updates to the set.
+contain updates to the set.</para>
 
 <para> Similarly, <command>SET_DROP_TABLE</command>,
 <command>SET_DROP_SEQUENCE</command>,
 <command>SET_MOVE_TABLE</command>,
 <command>SET_MOVE_SEQUENCE</command> <command>DROP_SET</command>,
 <command>MERGE_SET</command>, will be handled
-<quote>appropriately</quote>.
+<quote>appropriately</quote>.</para></listitem>
 
 <listitem><para> The various events involved in node configuration are
 irrelevant to log shipping:
@@ -163,7 +163,7 @@
 <command>STORE_PATH</command>,
 <command>DROP_PATH</command>,
 <command>STORE_LISTEN</command>,
-<command>DROP_LISTEN</command>
+<command>DROP_LISTEN</command></para></listitem>
 
 <listitem><para> Events involved in describing how particular sets are
 to be initially configured are similarly irrelevant:
@@ -172,7 +172,7 @@
 <command>SET_ADD_TABLE</command>,
 <command>SET_ADD_SEQUENCE</command>,
 <command>STORE_TRIGGER</command>,
-<command>DROP_TRIGGER</command>,
+<command>DROP_TRIGGER</command>,</para></listitem>
 
 </itemizedlist>
 </para>
@@ -183,7 +183,7 @@
 could failover to.  This would be quite useful if you were trying to
 construct a cluster of (say) 6 nodes; you could start by creating one
 subscriber, and then use log shipping to populate the other 4 in
-parallel.
+parallel.</para>
 
 <para> This usage is not supported, but presumably one could add the
 &slony1; configuration to the node, and promote it into being a new
@@ -206,7 +206,7 @@
 your <application> psql</application> session will <command> ABORT
 </command>, and then run through the remainder of that SYNC file
 looking for a <command>COMMIT</command> or <command>ROLLBACK</command>
-so that it can try to move on to the next transaction.
+so that it can try to move on to the next transaction.</para>
 
 <para> But we <emphasis> know </emphasis> that the entire remainder of
 the file will fail!  It is futile to go through the parsing effort of
@@ -217,21 +217,21 @@
 <itemizedlist>
 
 <listitem><para> Read the first few lines of the file, up to and
-including the <function> setsyncTracking_offline() </function> call.  
+including the <function> setsyncTracking_offline() </function> call.</para></listitem>  
 
-<listitem><para> Try to apply it that far.
+<listitem><para> Try to apply it that far.</para></listitem>
 
 <listitem><para> If that failed, then it is futile to continue;
 <command>ROLLBACK</command> the transaction, and perhaps consider
-trying the next file.
+trying the next file.</para></listitem>
 
 <listitem><para> If the <function> setsyncTracking_offline()
 </function> call succeeds, then you have the right next SYNC file, and
 should apply it.  You should probably <command>ROLLBACK</command> the
 transaction, and then use <application>psql</application> to apply the
-entire file full of updates.
+entire file full of updates.</para></listitem>
 
-</itemizedlist>
+</itemizedlist></para>
 
 <para> In order to support the approach of grabbing just the first few
 lines of the sync file, the format has been set up to have a line of
@@ -243,7 +243,7 @@
 start transaction;
 select "_T1".setsyncTracking_offline(1, '744', '745');
 --------------------------------------------------------------------
-</programlisting>
+</programlisting></para></listitem>
 
 </itemizedlist>
 </sect2>
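The recovery procedure described in the logshipping.sgml hunk above (read the file only up to and including the <function>setsyncTracking_offline()</function> call, try that much in its own transaction, and apply the full file only if the probe succeeds) can be sketched as follows. This is a minimal illustration, not part of Slony-I; the function name `split_sync_file` and the sample spool text are invented for the example:

```python
# Split a SYNC spool file into the setsyncTracking_offline() preamble
# and the remainder, so the preamble can be tried first in a separate
# transaction before the bulk of the updates is applied.

def split_sync_file(text):
    """Return (header, body): header runs up to and including the
    setsyncTracking_offline() call, body is everything after it."""
    lines = text.splitlines(keepends=True)
    header_lines = []
    for i, line in enumerate(lines):
        header_lines.append(line)
        if "setsyncTracking_offline" in line:
            return "".join(header_lines), "".join(lines[i + 1:])
    # No tracking call found: treat the whole file as the header.
    return "".join(header_lines), ""

# Sample modeled on the programlisting in the hunk above.
sample = (
    "start transaction;\n"
    "select \"_T1\".setsyncTracking_offline(1, '744', '745');\n"
    "insert into t1 values (1);\n"
    "commit;\n"
)
header, body = split_sync_file(sample)
```

If applying `header` fails (wrong next SYNC file), the transaction is rolled back and the next file is tried; if it succeeds, the whole file can then be fed to <application>psql</application>.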
Index: usingslonik.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/usingslonik.sgml,v
retrieving revision 1.8
retrieving revision 1.9
diff -Ldoc/adminguide/usingslonik.sgml -Ldoc/adminguide/usingslonik.sgml -u -w -r1.8 -r1.9
--- doc/adminguide/usingslonik.sgml
+++ doc/adminguide/usingslonik.sgml
@@ -283,15 +283,15 @@
 pa_server, pa_client) from _slonycluster.sl_path;</command></para>
 
 <para> The result of this set of queries is to regenerate
-<emphasis/and propagate/ the listen paths.  By running the main
-<function/ _slonycluster.storelisten()/ function,
-<command/STORE_LISTEN/ events are raised to cause the listen paths to
-propagate to the other nodes in the cluster.
+<emphasis>and propagate</emphasis> the listen paths.  By running the main
+<function> _slonycluster.storelisten()</function> function,
+<command>STORE_LISTEN</command> events are raised to cause the listen paths to
+propagate to the other nodes in the cluster.</para>
 
-<para> If there was a <emphasis/local/ problem on one node, and you
+<para> If there was a <emphasis>local</emphasis> problem on one node, and you
 didn't want the updates to propagate (this would be an unusual
 situation; you almost certainly want to fix things
-<emphasis/everywhere/), the queries would instead be:
+<emphasis>everywhere</emphasis>), the queries would instead be:</para>
 
 <para> <command> select
 slonycluster.droplisten_int(li_origin,li_provider,li_receiver) from
@@ -312,35 +312,35 @@
 <listitem><para> The <quote>main</quote> function
 (<emphasis>e.g.</emphasis> - without the <command>_int</command>
 suffix) is called on a <quote>relevant</quote> node in the &slony1;
-cluster.
+cluster.</para>
 
 <para> In some cases, the function may be called on any node, and it
 can satisfactorily propagate to other nodes.  That is true for <xref
 linkend="function.storelisten-integer-integer-integer">, for instance.
 In other cases, the function needs to be called on some particular
 node because that is the only place where data may be reasonably
-validated.  For instance, <xref linkend=
-"function.subscribeset-integer-integer-integer-boolean"> must be
-called on the receivernode.
+validated.  For instance, <xref
+	  linkend="function.subscribeset-integer-integer-integer-boolean"> must be
+called on the receivernode.</para></listitem>
 
 <listitem><para> If that <quote>main</quote> function succeeds, then
 the configuration change is performed on the local node, and an event
 is created using <xref linkend="function.createevent-name-text"> to
 cause the configuration change to propagate to all of the other nodes
-in the &slony1; cluster.
+in the &slony1; cluster.</para></listitem>
 
 <listitem><para> Thirdly, the <command>_int</command> version of the
-function must be run.  
+function must be run.</para>  
 
 <para> In some cases, where functions are idempotent, the node where
 the <quote>main</quote> function runs can do this fairly early in
-processing.
+processing.</para>
 
 <para> When the event propagates, the other nodes will run the
 <command>_int</command> version, we rather hope with good
-success. </para>
+success. </para></listitem>
 
-</itemizedlist>
+</itemizedlist></para>
 
 </sect1>
 <!-- Keep this comment at the end of the file
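The three-step pattern documented in the usingslonik.sgml hunks above (a <quote>main</quote> function runs on one relevant node, applies the change locally through its <command>_int</command> counterpart, then raises an event so every other node runs the <command>_int</command> version) can be modeled schematically. This is an illustrative toy, not Slony-I code; the `Cluster` and `Node` classes are invented for the example:

```python
# Toy model of Slony-I's configuration-change propagation: the "main"
# function runs once, the idempotent _int version runs everywhere.

class Node:
    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster
        self.listens = set()

    def storelisten_int(self, origin, provider, receiver):
        # Idempotent local application of the configuration change.
        self.listens.add((origin, provider, receiver))

    def storelisten(self, origin, provider, receiver):
        # Step 1: the "main" function is called on one relevant node.
        # Step 2: apply the change locally, then create an event.
        self.storelisten_int(origin, provider, receiver)
        self.cluster.create_event(self, "STORE_LISTEN",
                                  (origin, provider, receiver))

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, name):
        node = Node(name, self)
        self.nodes.append(node)
        return node

    def create_event(self, source, event, args):
        # Step 3: as the event propagates, every other node runs the
        # _int version of the function.
        for node in self.nodes:
            if node is not source and event == "STORE_LISTEN":
                node.storelisten_int(*args)

cluster = Cluster()
n1, n2, n3 = (cluster.add_node(n) for n in ("node1", "node2", "node3"))
n1.storelisten(1, 1, 2)
```

After the call on `n1`, all three nodes hold the same listen path, mirroring how a <command>STORE_LISTEN</command> event brings every node's configuration into agreement.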

