CVS User Account cvsuser
Fri Apr 15 20:33:18 PDT 2005
Log Message:
-----------
Moved the sample from README to SAMPLE
Moved install notes from README to INSTALL
Took items off of TODO that are now completed

Modified Files:
--------------
    slony1-engine:
        README (r1.10 -> r1.11)
        TODO (r1.2 -> r1.3)

Added Files:
-----------
    slony1-engine:
        INSTALL (r1.1)
        SAMPLE (r1.1)

-------------- next part --------------
Index: TODO
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/TODO,v
retrieving revision 1.2
retrieving revision 1.3
diff -LTODO -LTODO -u -w -r1.2 -r1.3
--- TODO
+++ TODO
@@ -1,43 +1,4 @@
 * Documentation
 ----------------
 
--Provide instructions on how to upgrade form 1.0 to 1.1
-
--Add more documentation on how to add/drop nodes from a running replication 
- cluster
-
-
-* Backend
-----------------
-
-Modify the @NAMESPACE@.logtrigger(namespace, number, keyform) function
-so it periodically does a SYNC even if nothing else causes one to be
-done.
-
-This amounts to running some equivalent to the code in sync_thread.c:
-
-  -> Start a serializable transaction
-
-  -> Get last value of sl_action_seq
-
-  -> If value has changed, or sync interval timeout has arrived
-
-     -> Generate SYNC event 
-                select @NAMESPACE@.createEvent('_@NAMESPACE@', 'SYNC', NULL);
-
-* slonspool
-----------------
-
-"Log Spooling" version of slon
-
-This writes data to files rather than connecting to a destination
-database.
-
-* named nodes
-----------------
-
-Augment slonik to allow users to give nodes symbolic names that may be
-used instead of node numbers throughout Slonik.
-
-That permits renaming nodes, and using logical names rather than
-(likely) cryptic numbers.
+-Provide instructions on how to upgrade from 1.0 to 1.1
--- /dev/null
+++ SAMPLE
@@ -0,0 +1,293 @@
+Creating a sample database with application
+--------------------------------------------
+
+$Id: SAMPLE,v 1.1 2005/04/15 19:33:15 cbbrowne Exp $
+
+
+As a first test scenario, the pgbench test program, which ships with
+PostgreSQL in the ./contrib directory and produces a reasonably heavy
+load of concurrent transactions, will satisfy our needs.
+
+NOTE: the PL/pgSQL procedural language MUST BE INSTALLED in both node
+databases.
+
+LOCAL WILL BE REMOTE
+
+The Slony replication system is based on triggers.  One of the nice
+side effects of this is that you may, in theory, replicate between two
+databases under the same postmaster.  Things can get a little
+confusing when we're talking about local vs. remote database in that
+context.  To avoid confusion, from here on we will strictly use the
+term "node" to mean one database and its replication daemon program
+slon.
+
+To make this example work for people with one or two computer systems
+at their disposal, we will define a few shell variables that will be
+used throughout the following scripts.
+
+CLUSTER=test1
+DBNAME1=pgbench_node1
+DBNAME2=pgbench_node2
+HOST1=<host name of pgbench_node1>
+HOST2=<host name of pgbench_node2>
+SLONY_USER=<PostgreSQL superuser to connect as for replication>
+PGBENCH_USER=<normal user to run the pgbench application>
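+
+Purely as an illustration, on a single test machine the last four
+assignments might look like this (these host and user names are
+hypothetical examples, not required values):
+
+HOST1=localhost
+HOST2=localhost
+SLONY_USER=postgres
+PGBENCH_USER=pgbench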
+
+Here, we assume that the Unix user executing all the commands in
+this example has the ability to establish all database connections.
+This is not intended to become a "pg_hba.conf HOWTO", so replacing
+whatever sophisticated authentication method is used with "trust"
+until replication works would not be a bad idea.
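+
+As a rough sketch only (not a recommendation for production), such a
+temporary pg_hba.conf on both hosts might contain lines like the
+following; the network address is a made-up example and the exact
+column layout varies with the PostgreSQL version:
+
+local   all   all                       trust
+host    all   all   192.168.1.0/24      trust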
+
+PREPARING THE TWO DATABASES
+
+As of this writing, Slony-I does not attempt to automatically copy the
+table definitions when a node subscribes to a data set.  Because of this,
+we have to create one full and one schema-only pgbench database.
+
+createdb -O $PGBENCH_USER -h $HOST1 $DBNAME1
+createdb -O $PGBENCH_USER -h $HOST2 $DBNAME2
+pgbench -i -s 1 -U $PGBENCH_USER -h $HOST1 $DBNAME1
+pg_dump -s -U postgres -h $HOST1 $DBNAME1 | psql -U postgres -h $HOST2 $DBNAME2
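+
+The PL/pgSQL language mentioned in the NOTE above can be installed
+with the createlang utility that ships with PostgreSQL; a minimal
+sketch, assuming the same connection settings as above:
+
+createlang -h $HOST1 plpgsql $DBNAME1
+createlang -h $HOST2 plpgsql $DBNAME2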
+
+From this moment on, the pgbench test program can be started to
+produce transaction load.  It is recommended to run the pgbench
+application in the foreground of a separate terminal.  This gives the
+flexibility to stop and restart it with different parameters at any
+time.  The command to run it would look like this:
+
+pgbench [-n] -s 1 -c <n_clients> -U $PGBENCH_USER -h $HOST1 -t <n_trans> $DBNAME1
+
+    * -n suppresses deleting the content of the history table and
+      vacuuming the database at the start of pgbench.
+    * -c <n_clients> specifies the number of concurrent clients to
+      simulate (should be between 1 and 10).  Note that a high number
+      will cause a significant load on the server, since each of these
+      clients will try to run transactions as fast as possible.
+    * -t <n_trans> specifies the number of transactions every client
+      executes before terminating.  A value of 1000 is a good starting
+      point; a concrete invocation follows below.
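+
+As a purely illustrative example (the client and transaction counts
+are arbitrary choices, not recommendations), five clients running 1000
+transactions each against node 1 could be started with:
+
+pgbench -n -s 1 -c 5 -U $PGBENCH_USER -h $HOST1 -t 1000 $DBNAME1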
+
+Configuring the databases for replication
+------------------------------------------
+
+Creating the configuration tables, stored procedures, triggers and
+setting up the configuration is done with the slonik command.  It is a
+specialized scripting aid that mostly calls stored procedures in the
+node databases.  The script to create the initial configuration for a
+simple master-slave setup of our pgbench databases looks like this:
+Script slony_sample1_setup.sh
+
+#!/bin/sh
+
+CLUSTER=test1
+DBNAME1=pgbench_node1
+DBNAME2=pgbench_node2
+HOST1=<host name of pgbench_node1>
+HOST2=<host name of pgbench_node2>
+SLONY_USER=<postgres superuser to connect as for replication>
+PGBENCH_USER=<normal user to run the pgbench application>
+
+slonik <<_EOF_
+    # ----
+    # This defines which namespace the replication system uses
+    # ----
+    cluster name = $CLUSTER;
+
+    # ----
+    # Admin conninfo's are used by the slonik program to connect
+    # to the node databases.  So these are the PQconnectdb arguments
+    # that connect from the administrator's workstation (where
+    # slonik is executed).
+    # ----
+    node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
+    node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
+
+    # ----
+    # Initialize the first node.  The id must be 1.
+    # This creates the schema "_test1" containing all replication
+    # system specific database objects.
+    # ----
+    init cluster ( id = 1, comment = 'Node 1' );
+
+    # ----
+    # The pgbench table history does not have a primary key or
+    # any other unique constraint that could be used to identify
+    # a row.  The following command adds a bigint column named
+    # "_Slony-I_test1_rowID" to the table.  It will have a default
+    # value of nextval('"_test1".sl_rowid_seq'), be unique and not
+    # null.  All existing rows will be initialized with a number.
+    # ----
+    table add key ( node id = 1, fully qualified name = 'public.history' );
+
+    # ----
+    # The Slony replication system organizes tables in sets.  The
+    # smallest unit another node can subscribe to is a set.  Usually the
+    # tables contained in one set would be all tables that have
+    # relationships to each other.  The following commands create
+    # one set containing all 4 pgbench tables.  The "master" or origin
+    # of the set is node 1.
+    # ----
+    create set ( id = 1, origin = 1, comment = 'All pgbench tables' );
+    set add table ( set id = 1, origin = 1,
+        id = 1, fully qualified name = 'public.accounts',
+        comment = 'Table accounts' );
+    set add table ( set id = 1, origin = 1,
+        id = 2, fully qualified name = 'public.branches',
+        comment = 'Table branches' );
+    set add table ( set id = 1, origin = 1,
+        id = 3, fully qualified name = 'public.tellers',
+        comment = 'Table tellers' );
+    set add table ( set id = 1, origin = 1,
+        id = 4, fully qualified name = 'public.history',
+        key = serial,
+        comment = 'Table history' );
+
+    # ----
+    # Create the second node, tell the two nodes how to connect to
+    # each other, and have them listen for events from each other.
+    # Note that these conninfo arguments are used by the
+    # slon daemon on node 1 to connect to the database of node 2
+    # and vice versa.  So if the replication system is supposed to
+    # use a separate backbone network between the database servers,
+    # this is the place to tell it.
+    # ----
+    store node ( id = 2, comment = 'Node 2' );
+    store path ( server = 1, client = 2,
+        conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER');
+    store path ( server = 2, client = 1,
+        conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER');
+    store listen ( origin = 1, provider = 1, receiver = 2 );
+    store listen ( origin = 2, provider = 2, receiver = 1 );
+_EOF_
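+
+One quick way to sanity-check that the script did its work is to look
+at the Slony-I configuration in the new "_test1" schema on node 1; a
+sketch, assuming psql can connect with the same settings used above:
+
+psql -h $HOST1 -U $SLONY_USER -d $DBNAME1 -c 'select * from "_test1".sl_node;'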
+
+Time to replicate
+-------------------
+
+Is the pgbench application still running?
+
+At this point we have 2 databases that are fully prepared.  One is the
+master database accessed by the pgbench application.  It is time now
+to start the replication daemons.
+
+On the system $HOST1, the command to start the replication daemon is
+
+slon $CLUSTER "dbname=$DBNAME1 user=$SLONY_USER"
+
+Since the replication daemon for node 1 is running on the same host as
+the database for node 1, there is no need for it to connect via a
+TCP/IP socket.
+
+Likewise we start the replication daemon for node 2 on $HOST2 with
+
+slon $CLUSTER "dbname=$DBNAME2 user=$SLONY_USER"
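+
+Both slon commands stay in the foreground.  If the two nodes happen to
+live on the same machine and you want the terminal back, a common
+pattern is to redirect their output and background them (the log file
+names here are arbitrary):
+
+slon $CLUSTER "dbname=$DBNAME1 user=$SLONY_USER" > slon_node1.log 2>&1 &
+slon $CLUSTER "dbname=$DBNAME2 user=$SLONY_USER" > slon_node2.log 2>&1 &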
+
+Even though the two daemons start right away and exchange a lot of
+messages, they are not replicating any data yet.  They are merely
+synchronizing their information about the cluster configuration.
+
+To start replicating the 4 pgbench tables from node 1 to node 2 we
+have to execute the following script:
+
+slony_sample1_subscribe.sh:
+
+#!/bin/sh
+
+CLUSTER=test1
+DBNAME1=pgbench_node1
+DBNAME2=pgbench_node2
+HOST1=<host name of pgbench_node1>
+HOST2=<host name of pgbench_node2>
+SLONY_USER=<postgres superuser to connect as for replication>
+PGBENCH_USER=<normal user to run the pgbench application>
+
+slonik <<_EOF_
+    # ----
+    # This defines which namespace the replication system uses
+    # ----
+    cluster name = $CLUSTER;
+
+    # ----
+    # Admin conninfo's are used by the slonik program to connect
+    # to the node databases.  So these are the PQconnectdb arguments
+    # that connect from the administrator's workstation (where
+    # slonik is executed).
+    # ----
+    node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
+    node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
+
+    # ----
+    # Node 2 subscribes set 1
+    # ----
+    subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
+_EOF_
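+
+To confirm that the subscription has been recorded, one can peek at
+the Slony-I catalog through node 1's admin connection; a sketch:
+
+psql -h $HOST1 -U $SLONY_USER -d $DBNAME1 -c 'select * from "_test1".sl_subscribe;'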
+
+Shortly after this script is executed, the replication daemon on
+$HOST2 will start to copy the current content of all 4 replicated
+tables.  While doing so, of course, the pgbench application will
+continue to modify the database.  When the copy process is finished,
+the replication daemon on $HOST2 will start to catch up by applying
+the accumulated replication log.  It will do this in small steps, 10
+seconds' worth of application work at a time.  Depending on the
+performance of the two systems involved, the sizing of the two
+databases, the actual transaction load and how well the two databases
+are tuned and maintained, this catchup process can be a matter of
+minutes, hours, or infinity.
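+
+One rough way to watch the catchup, assuming the sl_status view is
+present in the installed Slony-I version, is to query the event lag as
+seen from node 1 until it drops to zero:
+
+psql -h $HOST1 -U $SLONY_USER -d $DBNAME1 \
+    -c 'select st_lag_num_events, st_lag_time from "_test1".sl_status;'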
+
+Checking the result
+-----------------------------
+
+To check the result of the replication attempt (after all, the
+intention was to create an exact copy of the first node), the pgbench
+application must be stopped and any remaining replication backlog
+processed by node 2.  After that, we create ordered data exports of
+the two databases and compare them:
+
+Script slony_sample1_compare.sh
+
+#!/bin/sh
+
+CLUSTER=test1
+DBNAME1=pgbench_node1
+DBNAME2=pgbench_node2
+HOST1=<host name of pgbench_node1>
+HOST2=<host name of pgbench_node2>
+SLONY_USER=<postgres superuser to connect as for replication>
+PGBENCH_USER=<normal user to run the pgbench application>
+
+echo -n "**** comparing sample1 ... "
+psql -U $PGBENCH_USER -h $HOST1 $DBNAME1 >dump.tmp.1.$$ <<_EOF_
+    select 'accounts:'::text, aid, bid, abalance, filler
+        from accounts order by aid;
+    select 'branches:'::text, bid, bbalance, filler
+        from branches order by bid;
+    select 'tellers:'::text, tid, bid, tbalance, filler
+        from tellers order by tid;
+    select 'history:'::text, tid, bid, aid, delta, mtime, filler,
+        "_Slony-I_${CLUSTER}_rowID"
+        from history order by "_Slony-I_${CLUSTER}_rowID";
+_EOF_
+psql -U $PGBENCH_USER -h $HOST2 $DBNAME2 >dump.tmp.2.$$ <<_EOF_
+    select 'accounts:'::text, aid, bid, abalance, filler
+        from accounts order by aid;
+    select 'branches:'::text, bid, bbalance, filler
+        from branches order by bid;
+    select 'tellers:'::text, tid, bid, tbalance, filler
+        from tellers order by tid;
+    select 'history:'::text, tid, bid, aid, delta, mtime, filler,
+        "_Slony-I_${CLUSTER}_rowID"
+        from history order by "_Slony-I_${CLUSTER}_rowID";
+_EOF_
+
+if diff dump.tmp.1.$$ dump.tmp.2.$$ >test_1.diff ; then
+    echo "success - databases are equal."
+    rm dump.tmp.?.$$
+    rm test_1.diff
+else
+    echo "FAILED - see test_1.diff for database differences"
+fi
+
+If this script reports any differences, please report them to the
+developers; we would appreciate hearing how this happened.
+
--- /dev/null
+++ INSTALL
@@ -0,0 +1,66 @@
+Building and Installing Slony-I
+------------------------------------
+
+$Id: INSTALL,v 1.1 2005/04/15 19:33:15 cbbrowne Exp $
+
+Slony currently supports PostgreSQL 7.3.3 (and higher),  7.4.x, and 8.x.
+
+Slony normally needs to be built and installed by the PostgreSQL Unix
+user.  The installation target must be identical to the existing
+PostgreSQL installation, particularly because several Slony-I
+components are libraries and SQL scripts that need to live in the
+PostgreSQL lib and share directories.
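+
+Depending on the PostgreSQL version, the pg_config utility can report
+where the existing installation keeps these pieces (which options are
+available varies by version); for example:
+
+pg_config --bindir
+pg_config --libdir
+pg_config --includedir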
+
+On certain platforms (AIX and Solaris are known to need this),
+PostgreSQL must be configured with the option --enable-thread-safety
+to provide correct client libraries.
+
+In earlier releases, the location of the PostgreSQL source tree was
+specified with the configure option --with-pgsourcetree=<dir>.  As of
+1.1, this is no longer necessary; instead, the locations of the
+database components are specified individually, such as:
+
+--with-pgconfigdir=<dir>        Location of the PostgreSQL pg_config program.
+--with-pgbindir=<dir>           Location of the PostgreSQL postmaster.
+--with-pgincludedir=<dir>       Location of the PostgreSQL headers.
+--with-pgincludeserverdir=<dir> Location of the PostgreSQL server headers.
+--with-pglibdir=<dir>           Location of the PostgreSQL libs.
+--with-pgpkglibdir=<dir>        Location of the PostgreSQL pkglibs. E.g. plpgsql.so
+--with-pgsharedir=<dir>         Location of the PostgreSQL share dir. E.g. postgresql.conf.sample
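+
+For instance, assuming PostgreSQL lives under the hypothetical prefix
+/usr/local/pgsql, pointing configure at its pg_config is often enough
+on a standard installation, with the options above available for
+overrides:
+
+./configure --with-pgconfigdir=/usr/local/pgsql/bin
+make
+make install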
+
+PostgreSQL version 8 installs the server header #include files by
+default; with version 7.4 and earlier, you need to make sure that the
+installation included running "make install-all-headers"; otherwise
+the server headers will not be installed, and Slony-I will be unable
+to compile.
+
+The main list of files installed within the PostgreSQL instance is:
+
+    * $bindir/slon
+    * $bindir/slonik
+    * $libdir/slony1_funcs$(DLSUFFIX)
+    * $libdir/xxid$(DLSUFFIX)
+    * $datadir/slony1_base.sql
+    * $datadir/slony1_base.v73.sql
+    * $datadir/slony1_base.v74.sql
+    * $datadir/slony1_funcs.sql
+    * $datadir/slony1_funcs.v73.sql
+    * $datadir/slony1_funcs.v74.sql
+    * $datadir/xxid.v73.sql 
+
+The .sql files are not fully substituted yet.  And yes, both the 7.3
+and the 7.4 files get installed on a system, irrespective of its
+version.  The slonik admin utility does namespace/cluster
+substitutions within the files, and loads those files when creating
+replication nodes.  At that point in time, the database being
+initialized may be remote and may run a different version of
+PostgreSQL than that of the local host.
+
+At the very least, the two shared objects installed in the $libdir
+directory must be installed onto every computer that is supposed to
+become a Slony node. (Other components may be able to be invoked
+remotely from other hosts.)
+
+If you wish to have the "altperl" administration tools available, you
+need to specify the "--with-perltools=somewhere" option.
+
Index: README
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/README,v
retrieving revision 1.10
retrieving revision 1.11
diff -LREADME -LREADME -u -w -r1.10 -r1.11
--- README
+++ README
@@ -1,369 +1,36 @@
-A Slony-I example step by step
+Slony-I
+------------------
+$Id$
+
+Slony-I is a "master to multiple slaves" replication system with
+cascading and failover.
+
+The big picture for the development of Slony-I is a master-slave
+system that includes all features and capabilities needed to replicate
+large databases to a reasonably limited number of slave systems.
 
-Table of Contents:
+Slony-I is a system for data centers and backup sites, where the
+normal mode of operation is that all nodes are available.
 
 1.  Build and install
-2.  Creating a sample database with application
-3.  Configuring the databases for replication
-4.  Time to replicate
-5.  Checking the result
-6.  If I run into problems...
-
-And now, the contents.
 
-1.  Build and install
+    See the INSTALL file nearby
 
-Slony currently supports PostgreSQL 7.3.3 (and higher),  7.4.x, and 8.x.
+2.  Creating a sample database with application
 
-Slony must be built and installed by the PostgreSQL Unix user. The
-installation target must be identical to the existing PostgreSQL
-installation. [[A copy of the original PostgreSQL source tree used to be
-necessary; no longer so for 1.1...]]
-
-On certain platforms (AIX and Solaris are amongst these), PostgreSQL
-must be configured with the option --enable-thread-safety to provide
-correct client libraries.
-
-The location of the PostgreSQL source-tree was specified with the
-configure option --with-pgsourcetree=<dir>. As of 1.1, this is no longer
-necessary; instead, locations of database components are specified
-individually, such as:
-
---with-pgconfigdir=<dir>        Location of the PostgreSQL pg_config program.
---with-pgbindir=<dir>           Location of the PostgreSQL postmaster.
---with-pgincludedir=<dir>       Location of the PostgreSQL headers.
---with-pgincludeserverdir=<dir> Location of the PostgreSQL server headers.
---with-pglibdir=<dir>           Location of the PostgreSQL libs.
---with-pgpkglibdir=<dir>        Location of the PostgreSQL pkglibs. E.g. plpgsql.so
---with-pgsharedir=<dir>         Location of the PostgreSQL share dir. E.g. postgresql.conf.sample
-
-PostgreSQL version 8 installs the server header #include files by
-default; with version 7.4 and earlier, you need to make sure that the
-build included doing "make install-all-headers", otherwise the server
-headers will not be installed.
-
-The complete list of files installed is:
-
-    * $bindir/slon
-    * $bindir/slonik
-    * $libdir/slony1_funcs$(DLSUFFIX)
-    * $libdir/xxid($DLSUFFIX)
-    * $datadir/slony1_base.sql
-    * $datadir/slony1_base.v73.sql
-    * $datadir/slony1_base.v74.sql
-    * $datadir/slony1_funcs.sql
-    * $datadir/slony1_funcs.v73.sql
-    * $datadir/slony1_funcs.v74.sql
-    * $datadir/xxid.v73.sql 
-
-The .sql files are not fully substituted yet.  And yes, both the 7.3 and
-the 7.4 files get installed on a system, irrespective of its version.
-The slonik admin utility does namespace/cluster substitutions within the
-files, and loads them files when creating replication nodes.  At that
-point in time, the database being initialized may be remote and may run
-a different version of PostgreSQL than that of the local host.
-
-At the very least, the two shared objects installed in the $libdir
-directory must be installed onto every computer that is supposed to
-become a Slony node. (Other components may be able to be invoked
-remotely from other hosts.)
+    See the SAMPLE file nearby
 
-2.  Creating a sample database with application
+3.  Help!  I ran into a problem.
 
-As a first test scenario, the pgbench test program that is shipped with
-PostgreSQL in the ./contrib directory and produces a not too light load
-of concurrent transactions will satisfy our needs.
-
-NOTE: the PL/PgSQL procedural language MUST BE INSTALLED into this
-database
-
-LOCAL WILL BE REMOTE
-
-The Slony replication system is based on triggers.  One of the nice
-side effects of this is that you may, in theory, replicate between two
-databases under the same postmaster.  Things can get a little
-confusing when we're talking about local vs. remote database in that
-context.  To avoid confusion, from here on we will strictly use the
-term "node" to mean one database and its replication daemon program
-slon.
-
-To make this example work for people with one or two computer systems
-at their disposal, we will define a few shell variables that will be
-used throughout the following scripts.
-
-CLUSTER=test1
-DBNAME1=pgbench_node1
-DBNAME2=pgbench_node2
-HOST1=<host name of pgbench_node1>
-HOST2=<host name of pgbench_node2>
-SLONY_USER=<PostgreSQL superuser to connect as for replication>
-PGBENCH_USER=<normal user to run the pgbench application>
-
-Here, we assume that the the Unix user executing all the commands in
-this example has the ability to establish all database connections.
-This is not intended to become a "pg_hba.conf HOWTO", so replacing
-whatever sophisticated authentication method is used with "trust"
-until replication works would not be a bad idea.
-
-PREPARING THE TWO DATABASES
-
-As of this writing Slony-I does not attempt to automatically copy the
-table definitions when a node subscribes to a data set.  Because of this,
-we have to create one full and one schema-only pgbench database.
-
-createdb -O $PGBENCH_USER -h $HOST1 $DBNAME1
-createdb -O $PGBENCH_USER -h $HOST2 $DBNAME2
-pgbench -i -s 1 -U $PGBENCH_USER -h $HOST1 $DBNAME1
-pg_dump -s -U postgres -h $HOST1 $DBNAME1 | psql -U postgres -h $HOST2 $DBNAME2
-
-From this moment on, the pgbench test program can be started to
-produce transaction load.  It is recommended to run the pgbench
-application in the foreground of a separate terminal.  This gives the
-flexibility to stop and restart it with different parameters at any
-time.  The command to run it would look like this:
-
-pgbench [-n] -s 1 -c <n_clients> -U $PGBENCH_USER -h $HOST1 -t <n_trans> $DBNAME1
-
-    * -n suppresses deleting the content of the history table and
-    * vacuuming the database at the start of pgbench.
-    * -c <n_clients> specifies the number of concurrent clients to
-    * simulate (should be between 1 and 10).  Note that a high number
-    * will cause a significant load on the server as any of these
-    * clients will try to run transactions as fast as possible.
-    * -t <n_trans> specifies the number of transactions every client
-    * executes before terminating.  A value of 1000 is a good point to
-    * start. 
-
-3.  Configuring the databases for replication
-
-Creating the configuration tables, stored procedures, triggers and
-setting up the configuration is done with the slonik command.  It is a
-specialized scripting aid that mostly calls stored procedures in the
-node databases.  The script to create the initial configuration for a
-simple master-slave setup of our pgbench databases looks like this:
-Script slony_sample1_setup.sh
-
-#!/bin/sh
-
-CLUSTER=test1
-DBNAME1=pgbench_node1
-DBNAME2=pgbench_node2
-HOST1=<host name of pgbench_node1>
-HOST2=<host name of pgbench_node2>
-SLONY_USER=<postgres superuser to connect as for replication>
-PGBENCH_USER=<normal user to run the pgbench application>
-
-slonik <<_EOF_
-    # ----
-    # This defines which namespace the replication system uses
-    # ----
-    cluster name = $CLUSTER;
-
-    # ----
-    # Admin conninfo's are used by the slonik program to connect
-    # to the node databases.  So these are the PQconnectdb arguments
-    # that connect from the administrators workstation (where
-    # slonik is executed).
-    # ----
-    node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
-    node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
-
-    # ----
-    # Initialize the first node.  The id must be 1.
-    # This creates the schema "_test1" containing all replication
-    # system specific database objects.
-    # ----
-    init cluster ( id = 1, comment = 'Node 1' );
-
-    # ----
-    # The pgbench table history does not have a primary key or
-    # any other unique constraint that could be used to identify
-    # a row.  The following command adds a bigint column named
-    # "_Slony-I_test1_rowID" to the table.  It will have a default
-    # value of nextval('"_test1".sl_rowid_seq'), be unique and not
-    # null.  All existing rows will be initialized with a number.
-    # ----
-    table add key ( node id = 1, fully qualified name = 'public.history' );
-
-    # ----
-    # The Slony replication system organizes tables in sets.  The
-    # smallest unit another node can subscribe is a set.  Usually the
-    # tables contained in one set would be all tables that have
-    # relationships to each other.  The following commands create
-    # one set containing all 4 pgbench tables.  The "master" or origin
-    # of the set is node 1.
-    # ----
-    create set ( id = 1, origin = 1, comment = 'All pgbench tables' );
-    set add table ( set id = 1, origin = 1,
-        id = 1, fully qualified name = 'public.accounts',
-        comment = 'Table accounts' );
-    set add table ( set id = 1, origin = 1,
-        id = 2, fully qualified name = 'public.branches',
-        comment = 'Table branches' );
-    set add table ( set id = 1, origin = 1,
-        id = 3, fully qualified name = 'public.tellers',
-        comment = 'Table tellers' );
-    set add table ( set id = 1, origin = 1,
-        id = 4, fully qualified name = 'public.history',
-        key = serial,
-        comment = 'Table history' );
-
-    # ----
-    # Create the second node, tell the two nodes how to connect to 
-    # each other and that they should listen for events on each
-    # other.  Note that these conninfo arguments are used by the
-    # slon daemon on node 1 to connect to the database of node 2
-    # and vice versa.  So if the replication system is supposed to
-    # use a separate backbone network between the database servers,
-    # this is the place to tell it.
-    # ----
-    store node ( id = 2, comment = 'Node 2' );
-    store path ( server = 1, client = 2,
-        conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER');
-    store path ( server = 2, client = 1,
-        conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER');
-    store listen ( origin = 1, provider = 1, receiver = 2 );
-    store listen ( origin = 2, provider = 2, receiver = 1 );
-_EOF_
-
-4.  Time to replicate
-
-Is the pgbench application still running?
-
-At this point we have 2 databases that are fully prepared.  One is the
-master database accessed by the pgbench application.  It is time now
-to start the replication daemons.
-
-On the system $HOST1, the command to start the replication daemon is
-
-slon $CLUSTER "dbname=$DBNAME1 user=$SLONY_USER"
-
-Since the replication daemon for node 1 is running on the same host as
-the database for node 1, there is no need to connect via TCP/IP socket
-for it.
-
-Likewise we start the replication daemon for node 2 on $HOST2 with
-
-slon $CLUSTER "dbname=$DBNAME2 user=$SLONY_USER"
-
-Even if the two daemons now will start right away and show a lot of
-message exchanging, they are not replicating any data yet.  What is
-going on is that they synchronize their information about the cluster
-configuration.
-
-To start replicating the 4 pgbench tables from node 1 to node 2 we
-have to execute the following script:
-
-slony_sample1_subscribe.sh:
-
-#!/bin/sh
-
-CLUSTER=test1
-DBNAME1=pgbench_node1
-DBNAME2=pgbench_node2
-HOST1=<host name of pgbench_node1>
-HOST2=<host name of pgbench_node2>
-SLONY_USER=<postgres superuser to connect as for replication>
-PGBENCH_USER=<normal user to run the pgbench application>
-
-slonik <<_EOF_
-    # ----
-    # This defines which namespace the replication system uses
-    # ----
-    cluster name = $CLUSTER;
-
-    # ----
-    # Admin conninfo's are used by the slonik program to connect
-    # to the node databases.  So these are the PQconnectdb arguments
-    # that connect from the administrators workstation (where
-    # slonik is executed).
-    # ----
-    node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
-    node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
-
-    # ----
-    # Node 2 subscribes set 1
-    # ----
-    subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
-_EOF_
-
-Shortly after this script is executed, the replication daemon on
-$HOST2 will start to copy the current content of all 4 replicated
-tables.  While doing so, of course, the pgbench application will
-continue to modify the database.  When the copy process is finished,
-the replication daemon on $HOST2 will start to catch up by applying
-the accumulated replication log.  It will do this in little steps, 10
-seconds worth of application work at a time.  Depending on the
-performance of the two systems involved, the sizing of the two
-databases, the actual transaction load and how well the two databases
-are tuned and maintained, this catchup process can be a matter of
-minutes, hours, or infinity.
-
-5.  Checking the result
-
-To check the result of the replication attempt (actually, the
-intention was to create an exact copy of the first node, no?) the
-pgbench application must be stopped and any eventual replication
-backlog processed by node 2.  After that, we create data exports (with
-ordering) of the 2 databases and compare them:
-
-Script slony_sample1_compare.sh
-
-#!/bin/sh
-
-CLUSTER=test1
-DBNAME1=pgbench_node1
-DBNAME2=pgbench_node2
-HOST1=<host name of pgbench_node1>
-HOST2=<host name of pgbench_node2>
-SLONY_USER=<postgres superuser to connect as for replication>
-PGBENCH_USER=<normal user to run the pgbench application>
-
-echo -n "**** comparing sample1 ... "
-psql -U $PGBENCH_USER -h $HOST1 $DBNAME1 >dump.tmp.1.$$ <<_EOF_
-    select 'accounts:'::text, aid, bid, abalance, filler
-        from accounts order by aid;
-    select 'branches:'::text, bid, bbalance, filler
-        from branches order by bid;
-    select 'tellers:'::text, tid, bid, tbalance, filler
-        from tellers order by tid;
-    select 'history:'::text, tid, bid, aid, delta, mtime, filler,
-        "_Slony-I_${CLUSTER}_rowID"
-        from history order by "_Slony-I_${CLUSTER}_rowID";
-_EOF_
-psql -U $PGBENCH_USER -h $HOST2 $DBNAME2 >dump.tmp.2.$$ <<_EOF_
-    select 'accounts:'::text, aid, bid, abalance, filler
-        from accounts order by aid;
-    select 'branches:'::text, bid, bbalance, filler
-        from branches order by bid;
-    select 'tellers:'::text, tid, bid, tbalance, filler
-        from tellers order by tid;
-    select 'history:'::text, tid, bid, aid, delta, mtime, filler,
-        "_Slony-I_${CLUSTER}_rowID"
-        from history order by "_Slony-I_${CLUSTER}_rowID";
-_EOF_
-
-if diff dump.tmp.1.$$ dump.tmp.2.$$ >test_1.diff ; then
-    echo "success - databases are equal."
-    rm dump.tmp.?.$$
-    rm test_1.diff
-else
-    echo "FAILED - see test_1.diff for database differences"
-fi
-
-If this script reports any differences, I would appreciate hearing how
-this happened.  
-
-6.  Help!  I ran into a problem.
-
-The file in the documentation area named "helpitsbroken.txt" contains
-a variety of notes describing problems that people have run into in
-setting up Slony-I instances.  
+    The file in the documentation area named "helpitsbroken.txt"
+    contains a variety of notes describing problems that people have
+    run into in setting up Slony-I instances.
 
-It may be worth checking there to see if the problem you are having
-has already been documented and diagnosed by someone else.
+    It may be worth checking there to see if the problem you are
+    having has already been documented and diagnosed by someone else.
 
-Please contact me as JanWieck at Yahoo.com or on the IRC channel #slony
-on freenode.net.
+You may also wish to contact the Slony-I mailing list, findable at
+<http://slony.info/>, or take a peek at who might be available to chat
+on the IRC channel #slony on freenode.net.
 
-Jan Wieck
+-- Slony-I team

