CVS User Account cvsuser
Tue Jul 11 07:35:19 PDT 2006
Log Message:
-----------
Barrel of changes to documentation added during Conference

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.sgml (r1.14 -> r1.15)
        maintenance.sgml (r1.20 -> r1.21)
        slonik_ref.sgml (r1.51 -> r1.52)
        slony.sgml (r1.30 -> r1.31)
        testbed.sgml (r1.8 -> r1.9)

-------------- next part --------------
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.14
retrieving revision 1.15
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.14 -r1.15
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -4,7 +4,6 @@
 
 <indexterm><primary>adding objects to replication</primary></indexterm>
 
-
 <para>You may discover that you have missed replicating things that
 you wish you were replicating.</para>
 
@@ -76,12 +75,12 @@
 have to interrupt normal activity to introduce replication.</quote>
 </para>
 
-<para> Instead, you can add the table via
+<para> Instead, you may add the table via
 <application>psql</application> on each node.
 
 </para> </listitem>
 
-<listitem><Para> Create a new replication set <xref linkend="stmtcreateset">
+<listitem><para> Create a new replication set <xref linkend="stmtcreateset">
 </para></listitem>
 <listitem><para> 
 Add the table to the new set <xref linkend="stmtsetaddtable"> 
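
Taken together, the steps above boil down to a short slonik script.  A
minimal sketch follows; the cluster name, node numbers, set and table
IDs, and conninfo strings are hypothetical placeholders, and in
practice you would wait for the new subscription to become active
before merging:

    # Sketch: replicate a new table by creating a temporary set,
    # subscribing it, and merging it into the existing set 1.
    # Assumes the table already exists on every node (added via psql).
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=host1 user=slony';
    node 2 admin conninfo = 'dbname=mydb host=host2 user=slony';

    create set (id = 2, origin = 1, comment = 'temporary set for new table');
    set add table (set id = 2, origin = 1, id = 10,
                   fully qualified name = 'public.my_new_table',
                   comment = 'newly replicated table');
    subscribe set (id = 2, provider = 1, receiver = 2, forward = no);

    # Once the subscription is active, fold the new set into set 1:
    merge set (id = 1, add id = 2, origin = 1);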
@@ -110,17 +109,39 @@
 <itemizedlist>
     <listitem><para> You absolutely <emphasis>must not</emphasis> include transaction control commands, particularly <command>BEGIN</command> and <command>COMMIT</command>, inside these DDL scripts. &slony1; wraps DDL scripts with a <command>BEGIN</command>/<command>COMMIT</command> pair; adding extra transaction control will mean that parts of the DDL will commit outside the control of &slony1; </para></listitem>
 
-    <listitem><Para> Avoid, if possible, having quotes in the DDL script </para> </listitem>
+<listitem><para> Before version 1.2, it was necessary to be
+exceedingly restrictive about what you tried to process using
+<xref linkend="stmtddlscript">. </para>
+
+<para> You could not have anything <command>'quoted'</command> in the
+script, as this would not be stored and forwarded properly.  As of
+1.2, quoting is now handled properly. </para>
+
+<para> If you submitted a series of DDL statements, the later ones
+could not make reference to objects created in the earlier ones, as
+the entire set of statements was submitted as a single query, where
+the query plan was based on the state of the database at
+the <emphasis>beginning,</emphasis> before any modifications had been
+made.  As of 1.2, if there are 12 SQL statements, they are each
+submitted individually, so that <command> alter table x add column c1
+integer; </command> may now be followed by <command> alter table x
+alter column c1 set not null; </command>.</para>
+
 </itemizedlist>
 
 </para></listitem>
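
For reference, submitting such a DDL script through slonik looks
roughly like the sketch below; the preamble, set ID, and file path are
hypothetical, and the file itself must contain no BEGIN or COMMIT:

    # Sketch: apply a DDL script; Slony-I supplies the BEGIN/COMMIT
    # wrapper itself, so the script must not include its own.
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=host1 user=slony';

    execute script (set id = 1,
                    filename = '/tmp/ddl_change.sql',
                    event node = 1);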
+
 <listitem><para> How to remove replication for a node</para>
+
 <para> You will want to remove the various &slony1; components connected to the database(s).</para>
 
-<para> We will just consider, for now, doing this to one node. If you have multiple nodes, you will have to repeat this as many times as necessary.</para>
+<para> We will just consider, for now, doing this to one node. If you
+have multiple nodes, you will have to repeat this as many times as
+necessary.</para>
 
 <para> Components to be Removed: </para>
 <itemizedlist>
+
 <listitem><para>    Log Triggers / Update Denial Triggers 
 
 </para></listitem>
@@ -136,48 +157,102 @@
 
 <para> How To Conveniently Handle Removal</para>
 <itemizedlist>
-<listitem><para>
-    You may use the Slonik <xref linkend="stmtdropnode"> command to remove the node from the cluster. This will lead to the triggers and everything in the cluster schema being dropped from the node. The <xref linkend="slon"> process will automatically die off. 
-</para></listitem>
-<listitem><para>
-
-    In the case of a failed node (where you used <xref linkend="stmtfailover"> to switch to another node), you may need to use <xref linkend="stmtuninstallnode"> to drop out the triggers and schema and functions. 
-</para></listitem>
-<listitem><para>
 
-    If the above things work out particularly badly, you could submit the SQL command <command>DROP SCHEMA "_ClusterName" CASCADE;</command>, which will drop out &slony1; functions, tables, and triggers alike. 
+<listitem><para> You may use the Slonik <xref linkend="stmtdropnode">
+command to remove the node from the cluster. This will lead to the
+triggers and everything in the cluster schema being dropped from the
+node. The <xref linkend="slon"> process will automatically die
+off.</para></listitem>
+
+<listitem><para> In the case of a failed node (where you
+used <xref linkend="stmtfailover"> to switch to another node), you may
+need to use <xref linkend="stmtuninstallnode"> to drop out the
+triggers and schema and functions.</para>
+
+<para> If the node failed due to some dramatic hardware failure
+(<emphasis>e.g.</emphasis> disk drives caught fire), there may not be
+a database left on the failed node; it would only be expected to
+survive if the <emphasis>database</emphasis> itself was fine, but you
+were forced to drop it from replication due to (say) some persistent
+network outage.
+</para></listitem>
+
+<listitem><para> If the above things work out particularly badly, you
+could submit the SQL command <command>DROP SCHEMA "_ClusterName"
+CASCADE;</command>, which will drop out &slony1; functions, tables,
+and triggers alike.  That is generally less suitable
+than <xref linkend="stmtuninstallnode">, because that command not only
+drops the schema and its contents, but also removes any columns added
+using <xref linkend="stmttableaddkey">.
 </para></listitem>
 </itemizedlist>
 </listitem>
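
The two slonik-based removal paths just listed look roughly as
follows; node numbers and conninfo strings are placeholders, and the
DROP SCHEMA fallback would be issued through psql rather than slonik:

    # Sketch: remove node 3 from the cluster.
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=host1 user=slony';
    node 3 admin conninfo = 'dbname=mydb host=host3 user=slony';

    # Healthy node: drop it; the event must be handled by a node that
    # will remain in the cluster.
    drop node (id = 3, event node = 1);

    # Failed node (already dropped via FAILOVER): clean out the
    # leftover triggers, schema, and functions instead.
    # uninstall node (id = 3);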
 
 <listitem><para> Adding A Node To Replication</para>
 
-<para>Things are not fundamentally different whether you are adding a brand new, fresh node, or if you had previously dropped a node and are recreating it. In either case, you are adding a node to replication. </para>
+<para>Things are not fundamentally different whether you are adding a
+brand new, fresh node, or if you had previously dropped a node and are
+recreating it. In either case, you are adding a node to
+replication. </para>
 
 <para>The needful steps are thus... </para>
 <itemizedlist>
-<listitem><para>
-   Determine the node number and any relevant DSNs for the new node.  Use &postgres; command <command>createdb</command> to create the database; add the table definitions for the tables that are to be replicated, as &slony1; does not automatically propagate that information.
-</para></listitem>
-<listitem><para>
-   If the node had been a failed node, you may need to issue the <xref linkend="slonik"> command <xref linkend="stmtdropnode"> in order to get rid of its vestiges in the cluster, and to drop out the schema that &slony1; creates.
+
+<listitem><para> Determine the node number and any relevant DSNs for
+the new node.  Use &postgres; command <command>createdb</command> to
+create the database; add the table definitions for the tables that are
+to be replicated, as &slony1; does not automatically propagate that
+information.
+</para>
+
+<para> If you do not have a perfectly clean SQL script to add in the
+tables, then run the tool <command> slony1_extract_schema.sh</command>
+from the <filename>tools</filename> directory to get the user schema
+from the origin node with all &slony1; <quote>cruft</quote>
+removed.  </para>
+</listitem>
+
+<listitem><para> If the node had been a failed node, you may need to
+issue the <xref linkend="slonik">
+command <xref linkend="stmtdropnode"> in order to get rid of its
+vestiges in the cluster, and to drop out the schema that &slony1;
+creates.
 </para></listitem>
-<listitem><para>
-    Issue the slonik command <xref linkend="stmtstorenode"> to establish the new node.
+
+<listitem><para> Issue the slonik
+command <xref linkend="stmtstorenode"> to establish the new node.
 </para></listitem>
-<listitem><para>
-    At this point, you may start a <xref linkend="slon">  daemon against the new node. It may not know much about the other nodes yet, so the logs for this node may be pretty quiet.
+
+<listitem><para> At this point, you may start a &lslon; daemon against
+the new node. It may not know much about the other nodes yet, so the
+logs for this node may be pretty quiet.
 </para></listitem>
-<listitem><para>
-    Issue the slonik command <xref linkend="stmtstorepath"> to indicate how <xref linkend="slon"> processes are to communicate with the new node.  In &slony1; version 1.1 and later, this will then automatically generate <link linkend="listenpaths"> listen path </link> entries; in earlier versions, you will need to use <xref linkend="stmtstorelisten"> to generate them manually.
+
+<listitem><para> Issue the slonik
+command <xref linkend="stmtstorepath"> to indicate
+how <xref linkend="slon"> processes are to communicate with the new
+node.  In &slony1; version 1.1 and later, this will then automatically
+generate <link linkend="listenpaths"> listen path </link> entries; in
+earlier versions, you will need to
+use <xref linkend="stmtstorelisten"> to generate them manually.
 </para></listitem>
-<listitem><para>
-   Issue the slonik command <xref linkend="stmtsubscribeset"> to subscribe the node to some replication set. 
+
+<listitem><para> At this point, it is an excellent idea to run
+the <filename>tools</filename>
+script <command>test_slony_state-dbi.pl</command>, which rummages
+through the state of the entire cluster, pointing out any anomalies
+that it finds.  This includes a variety of communications
+problems.</para> </listitem>
+
+<listitem><para> Issue the slonik
+command <xref linkend="stmtsubscribeset"> to subscribe the node to
+some replication set.
 </para></listitem>
+
 </itemizedlist>
 </listitem>
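
The sequence above, minus the createdb and schema-loading steps that
happen outside slonik, might look like this sketch; all identifiers
and conninfo strings are hypothetical:

    # Sketch: introduce node 3 and subscribe it to set 1.
    # (Start a slon for node 3 after STORE NODE, per the steps above.)
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=host1 user=slony';
    node 3 admin conninfo = 'dbname=mydb host=host3 user=slony';

    store node (id = 3, comment = 'new subscriber');
    store path (server = 1, client = 3,
                conninfo = 'dbname=mydb host=host1 user=slony');
    store path (server = 3, client = 1,
                conninfo = 'dbname=mydb host=host3 user=slony');
    subscribe set (id = 1, provider = 1, receiver = 3, forward = no);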
 
-<listitem><para> How do I reshape the subscriptions?</para>
+<listitem><para> How do I reshape subscriptions?</para>
 
 <para> For instance, I want subscriber node 3 to draw data from node
 1, when it is presently drawing data from node 2. </para>
@@ -190,6 +265,53 @@
 the subscriptions.  Subscriptions will not be started from scratch;
 they will merely be reconfigured.  </para></listitem>
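
For the example given (node 3 to draw set 1 from node 1 instead of
node 2), the reshaping is just a re-issued subscription naming the new
provider; the preamble is hypothetical, and a suitable STORE PATH
between nodes 1 and 3 must already exist:

    # Sketch: repoint node 3's subscription of set 1 at provider 1.
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=host1 user=slony';

    subscribe set (id = 1, provider = 1, receiver = 3, forward = no);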
 
+<listitem><para> How do I use <xref linkend="logshipping">? </para> </listitem>
+
+<listitem><para> How do I know replication is working?</para> 
+
+<para> The ultimate proof is in looking at whether data added at the
+origin makes it to the subscribers.  That's a <quote>simple matter of
+querying</quote>.</para>
+
+<para> There are several ways of examining replication status, however: </para>
+<itemizedlist>
+<listitem><para> Look in the &lslon; logs.</para> 
+
+<para> They won't say too much, even at very high debugging levels, on
+an origin node; at debugging level 2, you should see, on subscribers,
+that SYNCs are being processed.  As of version 1.2, the information
+reported for SYNC processing includes counts of the numbers of tables
+processed, as well as numbers of tuples inserted, deleted, and
+updated.</para> </listitem>
+
+<listitem><para> Look in the view <command>sl_status</command>, on
+the origin node. </para>
+
+<para> This view will tell how far behind the various subscribing
+nodes are in processing events from the node where you run the query.
+It will only be <emphasis>very</emphasis> informative on a node that
+originates a replication set.</para> </listitem>
+
+<listitem><para> Run the <filename>tools</filename>
+script <command>test_slony_state-dbi.pl</command>, which rummages
+through the state of the entire cluster, pointing out any anomalies
+that it notices, as well as some information on the status of each
+node. </para> </listitem>
+
+</itemizedlist>
+
+</listitem>
+
+<listitem><para> What happens when I fail over?</para> 
+
+<para> To be written...</para> </listitem>
+
+<listitem><para> How do I <quote>move master</quote> to a new node? </para> 
+
+<para> Obviously, use <xref linkend="stmtmoveset">; more details
+should be added...</para>
+</listitem>
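
As a hedged sketch of that operation (IDs and conninfo strings are
placeholders), moving the origin of set 1 from node 1 to node 3 is
typically a lock followed by the move:

    # Sketch: shift the origin of set 1 from node 1 to node 3.
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=host1 user=slony';
    node 3 admin conninfo = 'dbname=mydb host=host3 user=slony';

    lock set (id = 1, origin = 1);   # quiesce updates on the old origin
    move set (id = 1, old origin = 1, new origin = 3);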
+
 </itemizedlist>
 </sect1>
 <!-- Keep this comment at the end of the file
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.20
retrieving revision 1.21
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.20 -r1.21
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -35,7 +35,6 @@
 &pglistener; growing large and will slow
 replication.</para></listitem>
 
-
 <listitem> <para> The <link linkend="dupkey"> Duplicate Key Violation
 </link> bug has helped track down some &postgres; race conditions.
 One remaining issue is that it appears that there is a case where
@@ -46,6 +45,13 @@
 sl_log_1;</command> periodically to avoid the problem
 occurring. </para> </listitem>
 
+<listitem><para> As of version 1.2, <quote>log switching</quote>
+functionality is in place; every so often, replication switches
+between storing data in &sl-log-1; and &sl-log-2; so that
+the <quote>elder</quote> of the two may be cleared out
+with <command>TRUNCATE</command>.</para>
+
+<para> That means that these tables are completely emptied on a
+regular basis, so that you will not suffer from them having grown to
+some significant size, due to heavy load, after which they are
+incapable of shrinking back down.</para> </listitem>
+
 </itemizedlist>
 </para>
 
Index: slonik_ref.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.51
retrieving revision 1.52
diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.51 -r1.52
--- doc/adminguide/slonik_ref.sgml
+++ doc/adminguide/slonik_ref.sgml
@@ -578,7 +578,8 @@
        <listitem><para> Node ID of the node to remove.</para></listitem>
       </varlistentry>
       <varlistentry><term><literal> EVENT NODE = ival </literal></term>
-       <listitem><para> Node ID of the node to generate the event.</para></listitem>
+       <listitem><para> Node ID of the node to generate the event; default is 1.
+       </para></listitem>
       </varlistentry>
      </variablelist>
     </para>
@@ -597,6 +598,11 @@
    <para>After dropping a node, you may also need to recycle
    connections in your application.</para></warning>
 
+   <warning><para> You cannot set <command>EVENT NODE</command> to
+   the node being dropped; the request must go to some node that
+   will remain in the cluster. </para></warning>
+
    </refsect1>
    <refsect1><title>Example</title>
     <programlisting>
Index: testbed.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/testbed.sgml,v
retrieving revision 1.8
retrieving revision 1.9
diff -Ldoc/adminguide/testbed.sgml -Ldoc/adminguide/testbed.sgml -u -w -r1.8 -r1.9
--- doc/adminguide/testbed.sgml
+++ doc/adminguide/testbed.sgml
@@ -4,10 +4,33 @@
 <indexterm><primary>test bed framework</primary></indexterm>
 
 <para> As of version 1.1.5, &slony1; has a common test bed framework
-intended to better support performing a comprehensive set of tests.
-The code lives in the source tree under the <filename> tests
+intended to better support running a comprehensive set of tests at
+least somewhat automatically.  Older tests
+used <application>pgbench</application> (not
+a <emphasis>bad</emphasis> thing) but were troublesome to automate
+because they were set up to spawn each &lslon; in
+an <application>xterm</application> for the user
+to <emphasis>watch</emphasis>.</para>
+
+<para> The new test framework is mostly written in Bourne shell, and
+is intended to be portable to both Bash (widely used on Linux) and
+Korn shell (widely found on commercial UNIX systems).  The code lives
+in the source tree under the <filename> tests
 </filename> directory.</para>
 
+<para> At present, nearly all of the tests make use of only two
+databases that, by default, are on a single &postgres; postmaster on
+one host.  This is perfectly fine for those tests that involve
+verifying that &slony1; functions properly on various sorts of data.
+Those tests do things like varying date styles, and creating tables
+and sequences that involve unusual names to verify that quoting is
+being handled properly. </para>
+
+<para> It is also possible to configure environment variables so that
+the replicated nodes will be placed on different database backends,
+optionally on remote hosts, running varying versions of
+&postgres;.</para>
+
 <para>Here are some of the vital files...</para>
 
 <itemizedlist>
@@ -46,16 +69,17 @@
 <envar>PGBINDIR13</envar> which allow you to specify a separate path
 for each database instance.  That will be particularly useful when
 testing interoperability of &slony1; across different versions of
-&postgres;. In order to create a database of each respective version,
-you need to point to an <application>initdb</application> of the
-appropriate version.</para> </glossdef> </glossentry>
+&postgres; on different platforms. In order to create a database of
+each respective version, you need to point to
+an <application>initdb</application> of the appropriate
+version.</para> </glossdef> </glossentry>
 
 <glossentry><glossterm> <envar>PGPORT</envar> </glossterm>
 <glossdef><para> This indicates what port the backend is on.  By
 default, 5432 is used. </para> 
 
 <para> There are also variables <envar>PORT1</envar> thru
-<envar>PORT13</envar> which allows you to specify a separate port
+<envar>PORT13</envar> which allow you to specify a separate port
 number for each database instance.  That will be particularly useful
 when testing interoperability of &slony1; across different versions of
 &postgres;. </para> </glossdef> </glossentry>
@@ -85,7 +109,8 @@
 <filename>slonyregress13</filename> are used.
 </para>
 
-<para> You may override these from the environment. </para></glossdef>
+<para> You may override these from the environment if you have some
+reason to use different names. </para></glossdef>
 </glossentry>
 
 <glossentry>
@@ -109,6 +134,38 @@
 
 </glosslist>
 
+<para> Within each test, you will find the following files: </para>
+
+<itemizedlist>
+<listitem><para> <filename>README</filename> </para> 
+
+<para> This file contains a description of the test, and is displayed
+to the reader when the test is invoked. </para> </listitem>
+
+<listitem><para> <filename>generate_dml.sh</filename> </para> 
+<para> This contains script code that generates SQL to perform updates. </para> </listitem>
+<listitem><para> <filename>init_add_tables.ik</filename> </para> 
+<para>  This is a <xref linkend="slonik"> script for adding the tables for the test to replication. </para> </listitem>
+<listitem><para> <filename>init_cluster.ik</filename> </para> 
+<para> <xref linkend="slonik"> to initialize the cluster for the test.</para> </listitem>
+<listitem><para> <filename>init_create_set.ik</filename> </para> 
+<para> <xref linkend="slonik"> to initialize additional nodes to be used in the test. </para> </listitem>
+<listitem><para> <filename>init_schema.sql</filename> </para> 
+<para> An SQL script to create the tables and sequences required at the start of the test.</para> </listitem>
+<listitem><para> <filename>init_data.sql</filename> </para> 
+<para> An SQL script to initialize the schema with whatever state is required for the <quote>master</quote> node.  </para> </listitem>
+<listitem><para> <filename>init_subscribe_set.ik</filename> </para> 
+<para> A <xref linkend="slonik"> script to set up subscriptions.</para> </listitem>
+<listitem><para> <filename>settings.ik</filename> </para> 
+<para> A shell script that is used to control the size of the cluster, how many nodes are to be created, and where the origin is.</para> </listitem>
+<listitem><para> <filename>schema.diff</filename> </para> 
+<para> A series of SQL queries, one per line, that are to be used to validate that the data matches across all the nodes.  Note that in order to avoid spurious failures, the queries must use unambiguous <command>ORDER BY</command> clauses.</para> </listitem>
+</itemizedlist>
+
+<para> If there are additional test steps, such as
+running <xref linkend="stmtddlscript">,
+additional <xref linkend="slonik"> and SQL scripts may be necessary.</para>
+
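
To give a flavour of the slonik fragments, an init_add_tables.ik for a
trivial test might look roughly like the sketch below; table names and
IDs are hypothetical, and the harness prepends the cluster/conninfo
preamble when it assembles the final script:

    # Sketch of an init_add_tables.ik fragment: register test tables
    # in set 1 (preamble supplied by the test harness).
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.table1',
                   comment = 'first test table');
    set add table (set id = 1, origin = 1, id = 2,
                   fully qualified name = 'public.table2',
                   comment = 'second test table');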
 </sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
Index: slony.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v
retrieving revision 1.30
retrieving revision 1.31
diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.30 -r1.31


