CVS User Account cvsuser
Thu Aug 12 22:15:24 PDT 2004
Log Message:
-----------

1.  Updated README

- It now notes support for PostgreSQL version 8

- CVS HEAD version no longer uses "--with-pgsourcetree"; mentioned this...

- Changed the wording a bit in the triggers discussion

2.  slonik_commands.html

Typo fixes, and mentioned that index names do not need to be fully qualified with the namespace.

3.  Perl Tools

 - Generally - removed extra whitespace from try {} on error{} groupings

- Changed create_set.pl to split creating the set and adding the objects to it into separate slonik sessions

 - create_set.pl now uses "@PKEYEDTABLES" to list tables that already
 have primary keys, and "%KEYEDTABLES" to list tables with candidate
 primary keys, where you must specify which index is to be used.
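In slon.env these take roughly the following shape (the table and index names below are placeholders for illustration, not from any actual configuration):

```perl
# Tables that already have a real primary key; Slony-I uses it as-is.
@PKEYEDTABLES = (
    "nspace.accounts",
    "nspace.history",
);

# Tables with only a candidate primary key: map each table to the
# unique index slonik should use.  Note the index name is NOT
# namespace-qualified.
%KEYEDTABLES = (
    "nspace.some_table" => "key_on_whatever",
);
```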

 - init_cluster.pl has been changed to split more components into
 separate slonik sessions.  This is useful because if you add nodes,
 you can re-run init_cluster.pl: the parts that have already been done
 will fail harmlessly, while the parts that are NEW will take effect,
 updating the cluster's configuration.
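The session-splitting pattern can be sketched like this (run_slonik_script is stubbed out here to keep the example self-contained; the real one in slon-tools.pm pipes the script file to slonik):

```perl
use strict;
use warnings;

# Stub standing in for slon-tools.pm's run_slonik_script(): record
# each submitted session instead of piping it to slonik.
my @sessions;
sub run_slonik_script { push @sessions, $_[0]; }

# Submit each logical step as its own slonik session, so a step that
# fails on re-run (e.g. "init cluster" against an already-initialized
# node) does not abort the steps that still need to happen.
my @steps = (
    "init cluster (id = 1, comment = 'Node one');",
    "store node (id = 2, comment = 'Node two');",
);
run_slonik_script($_) for @steps;

print scalar(@sessions), " separate slonik sessions\n";
```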

 - Fixed up comments quite a bit
 - Eliminated redundant creations of paths

- slon-tools.pm:

  - Moved ps_args/get_pid out of the slon_start/slon_kill/watchdog scripts into one central place
  - Added some portability handling, since different Un*xes invoke "ps auxww" differently
  - Have start_slon be a function here...

Modified Files:
--------------
    slony1-engine:
        README (r1.4 -> r1.5)
    slony1-engine/doc/howto:
        Makefile (r1.5 -> r1.6)
        slonik_commands.html (r1.2 -> r1.3)
    slony1-engine/tools/altperl:
        create_set.pl (r1.4 -> r1.5)
        drop_node.pl (r1.2 -> r1.3)
        drop_set.pl (r1.2 -> r1.3)
        failover.pl (r1.2 -> r1.3)
        init_cluster.pl (r1.2 -> r1.3)
        merge_sets.pl (r1.2 -> r1.3)
        reset_cluster.pl (r1.2 -> r1.3)
        restart_node.pl (r1.1 -> r1.2)
        slon-tools.pm (r1.3 -> r1.4)
        slon.env (r1.2 -> r1.3)
        slon_kill.pl (r1.2 -> r1.3)
        slon_start.pl (r1.3 -> r1.4)
        slon_watchdog.pl (r1.1 -> r1.2)

Added Files:
-----------
    slony1-engine/doc/howto:
        helpitsbroken.txt (r1.1)

Removed Files:
-------------
    slony1-engine/tools/altperl:
        uninstall_node.pl
        update_node.pl

Index: README
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/README,v
retrieving revision 1.4
retrieving revision 1.5
diff -LREADME -LREADME -u -w -r1.4 -r1.5
--- README
+++ README
@@ -12,15 +12,19 @@
 
 1.  Build and install
 
-Slony currently supports PostgreSQL 7.3.x and 7.4.x.
+Slony currently supports PostgreSQL 7.3.x, 7.4.x, and 8.x.
 
 Slony must be built and installed by the PostgreSQL Unix user.  The
 installation target will be identical to the existing PostgreSQL
 installation and the original PostgreSQL source tree must be
-available.  On certain platforms (AIX is one of these), PostgreSQL must
-be configured with the option --enable-thread-safety to provide
-correct client libraries.  The location of the PostgreSQL source-tree is
-specified with the configure option --with-pgsourcetree=<dir>.
+available.  On certain platforms (AIX and Solaris are amongst these),
+PostgreSQL must be configured with the option --enable-thread-safety
+to provide correct client libraries.  
+
+The location of the PostgreSQL source-tree is specified with the
+configure option --with-pgsourcetree=<dir>.  [[ An effort is ongoing
+to eliminate the need for the source tree; AIX is a conspicuous
+platform where this doesn't yet work... ]]
 
 The complete list of files installed is:
 
@@ -58,11 +62,12 @@
 LOCAL WILL BE REMOTE
 
 The Slony replication system is based on triggers.  One of the nice
-side effects of this is that you can replicate between two databases
-under the same postmaster.  Things can get a little confusing when
-we're talking about local vs. remote database in that context.  To
-avoid confusion, from here on we will strictly use the term "node" to
-mean one database and its replication daemon program slon.
+side effects of this is that you may, in theory, replicate between two
+databases under the same postmaster.  Things can get a little
+confusing when we're talking about local vs. remote database in that
+context.  To avoid confusion, from here on we will strictly use the
+term "node" to mean one database and its replication daemon program
+slon.
 
 To make this example work for people with one or two computer systems
 at their disposal, we will define a few shell variables that will be
--- /dev/null
+++ doc/howto/helpitsbroken.txt
@@ -0,0 +1,94 @@
+Help!  It's broken!
+------------------------------
+
+You're having trouble getting it to work, and are scratching your head
+as to what might be wrong.
+
+Here are some things other people have stumbled over that might help
+you to "stumble more quickly."
+
+1.  I looked for the _clustername namespace, and it wasn't there!
+
+If the DSNs are wrong, then slon instances can't connect to the nodes.
+
+This will generally lead to nodes remaining entirely untouched.
+
+Recheck the connection configuration.  Note also that since slon links
+to libpq, password information may come from $HOME/.pgpass, which can
+silently fill in right (or wrong) authentication information.
+
+2.  Everything in my script _looks_ OK, and some data is getting
+pushed around, but not all of it.
+
+Slony logs might look like the following:
+
+DEBUG1 remoteListenThread_1: connected to 'host=host004 dbname=pgbenchrep user=postgres port=5432'
+ERROR  remoteListenThread_1: "select ev_origin, ev_seqno, ev_timestamp,        ev_minxid, ev_maxxid, ev_xip,        ev_type,        ev_data1, ev_data2,        ev_data3, ev_data4,        ev_data5, ev_data6,        ev_data7, ev_data8 from "_pgbenchtest".sl_event e where (e.ev_origin = '1' and e.ev_seqno > '1') order by e.ev_origin, e.ev_seqno" - could not receive data from server: Operation now in progress
+
+On AIX and Solaris (and possibly elsewhere), both Slony-I _and
+PostgreSQL_ must be compiled with the --enable-thread-safety option.
+The above results when PostgreSQL isn't so compiled.
+
+What happens is that the threadsafe libc and the non-threadsafe libpq
+use different memory locations for errno, which causes the request to
+fail.
+
+3.  I tried creating a CLUSTER NAME with a "-" in it.  That didn't work.
+
+Slony-I uses the same rules for unquoted identifiers as the PostgreSQL
+main parser, so no, you probably shouldn't put a "-" in your
+identifier name.
+
+You may be able to defeat this by putting "quotes" around identifier
+names, but it's liable to bite you somewhere...
+
+
+4.  After an immediate stop of PostgreSQL (simulating a system crash),
+pg_catalog.pg_listener contains a tuple with
+relname='_${cluster_name}_Restart'.  slon doesn't start because it
+thinks another process is serving the cluster on this node.  What can
+I do?  The tuples can't be dropped from this relation.
+
+Answer:  
+
+Before starting slon, do a 'restart node'.  PostgreSQL tries to notify
+the listeners and drops those that are not answering.  Slon then
+starts cleanly.
+
+5.  If I run a "ps" command, I, and everyone else, can see passwords
+on the command line.
+
+Take the passwords out of the Slony configuration, and put them into
+$HOME/.pgpass, which uses the line format
+hostname:port:database:username:password.
+
+6.  When I run the sample setup script I get an error message similar
+to:
+
+<stdin>:64: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  LOAD:
+could not open file '$libdir/xxid': No such file or directory
+
+Evidently, you haven't got the xxid.so library in the $libdir
+directory that the PostgreSQL instance is using.  Note that the Slony
+components need to be installed on ALL of the nodes, not just on the
+"master."
+
+This may also point to there being some other mismatch between the
+PostgreSQL binary instance and the Slony-I instance.  If you compiled
+Slony-I yourself, on a machine that may have multiple PostgreSQL
+builds "lying around," it's possible that the slon or slonik binaries
+are asking to load something that isn't actually in the library
+directory for the PostgreSQL database cluster that it's hitting.
+
+Long and short: This points to a need to "audit" what installations of
+PostgreSQL and Slony you have in place on the machine(s).
+Unfortunately, just about any mismatch will cause things not to link
+up quite right.
+
+7.  An oddity - no need for Fully Qualified Name for table keys...
+
+set add table (set id = 1, origin = 1, id = 27, full qualified name = 'nspace.some_table', key = 'key_on_whatever', 
+    comment = 'Table some_table in namespace nspace with a candidate primary key');
+
+If you have
+   key = 'nspace.key_on_whatever'
+the request will FAIL.
Index: Makefile
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/howto/Makefile,v
retrieving revision 1.5
retrieving revision 1.6
diff -Ldoc/howto/Makefile -Ldoc/howto/Makefile -u -w -r1.5 -r1.6
--- doc/howto/Makefile
+++ doc/howto/Makefile
@@ -50,4 +50,3 @@
 
 clean:
 	@$(pgbindir)/dropdb $(TEMPDB) || echo "unable to dropdb $(TEMPDB)" 
-
Index: slonik_commands.html
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/howto/slonik_commands.html,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/howto/slonik_commands.html -Ldoc/howto/slonik_commands.html -u -w -r1.2 -r1.3
--- doc/howto/slonik_commands.html
+++ doc/howto/slonik_commands.html
@@ -568,7 +568,7 @@
 	A <b>listen</b> entry causes a node (receiver) to query an event
 	provider for events that originate from a specific node, as well
 	as confirmations from every existing node. It requires a <b>path</b>
-	to exist so that the receiver (as client> can connect to the provider
+	to exist so that the receiver (as client) can connect to the provider
 	(as server).
 </p>
 <p>
@@ -898,7 +898,8 @@
 		to be used as the row identifier for replication purposes. Or the
 		keyword SERIAL to use the special column added with a previous
 		<a href="#table_add_key">TABLE ADD KEY</a> command. Default
-		is to use the tables primary key.
+		is to use the table's primary key.  The index name is <i> not </i> 
+		fully qualified; you must omit the namespace.
 	</p></td>
 </tr>
 <tr>
Index: slon-tools.pm
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/slon-tools.pm,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ltools/altperl/slon-tools.pm -Ltools/altperl/slon-tools.pm -u -w -r1.3 -r1.4
--- tools/altperl/slon-tools.pm
+++ tools/altperl/slon-tools.pm
@@ -81,11 +81,48 @@
   open(OUT, ">>$LOGDIR/slonik_scripts.log");
   my $now = `date`;
   chomp $now;
-  print OUT "-- Script: $script submitted at $now\n";
-  print OUT "-------------------------------------------------------------\n";
+  print OUT "/* Script: $script submitted at $now */\n";
+  print OUT "/* ------------------------------------------------------------- */\n";
   close OUT;
   `cat $script >> $LOGDIR/slonik_scripts.log`;
   print `slonik < $script`;
   unlink($script);
 }
+
+sub ps_args {
+  my $sys=`uname`;
+  chomp $sys;    # `uname` output ends with a newline; strip it so the eq tests match
+  if ($sys eq "Linux") {
+    return "/bin/ps -auxww";
+  } elsif ($sys eq "FreeBSD") {
+    return "/bin/ps -auxww";
+  } elsif ($sys eq "SunOS") {
+    return "/usr/ucb/ps -auxww";
+  } elsif ($sys eq "AIX") {
+    return "/usr/bin/ps auxww";
+  } 
+  return "/usr/bin/ps -auxww";    # This may be questionable for other systems; extend as needed!    
+}
+
+sub get_pid {
+  my ($node) = @_;
+  $node =~ /node(\d*)$/;
+  my $nodenum = $1;
+  my $pid;
+  my ($dbname, $dbport, $dbhost) = ($DBNAME[$nodenum], $PORT[$nodenum], $HOST[$nodenum]);
+  #  print "Searching for PID for $dbname on port $dbport\n";
+  open(PSOUT, ps_args() . "| egrep \"[s]lon $SETNAME\" | egrep \"host=$dbhost dbname=$dbname.*port=$dbport\" | sort -n | awk '{print \$2}'|");
+  while ($pid = <PSOUT>) {
+    chop $pid;
+  }
+  close(PSOUT);
+  return $pid;
+}
+
+sub start_slon {
+  my ($nodenum) = @_;
+  my ($dsn, $dbname) = ($DSN[$nodenum], $DBNAME[$nodenum]);
+  my $cmd = "$SLON_BIN_PATH/slon -s 1000 -d2  $SETNAME '$dsn' 2>$LOGDIR/slon-$dbname-node$nodenum.err >$LOGDIR/slon-$dbname-$nodenum.out &";
+  print "Invoke slon: $cmd\n";
+  system $cmd;
+}
 return 1;
Index: merge_sets.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/merge_sets.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/merge_sets.pl -Ltools/altperl/merge_sets.pl -u -w -r1.2 -r1.3
--- tools/altperl/merge_sets.pl
+++ tools/altperl/merge_sets.pl
@@ -34,8 +34,7 @@
 print SLONIK qq[
         try {
                 merge set (id = $set1, add id = $set2, origin = $node);
-        }
-        on error {
+} on error {
                 echo 'Failure to merge sets $set1 and $set2 with origin $node';
                 exit 1;
         }
Index: drop_node.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/drop_node.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/drop_node.pl -Ltools/altperl/drop_node.pl -u -w -r1.2 -r1.3
--- tools/altperl/drop_node.pl
+++ tools/altperl/drop_node.pl
@@ -20,8 +20,7 @@
 print SLONIK qq{
         try {
                 drop node (id = $node);
-        }
-        on error {
+} on error {
                 echo 'Failed to drop node $node from cluster';
                 exit 1;
         }
Index: create_set.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/create_set.pl,v
retrieving revision 1.4
retrieving revision 1.5
diff -Ltools/altperl/create_set.pl -Ltools/altperl/create_set.pl -u -w -r1.4 -r1.5
--- tools/altperl/create_set.pl
+++ tools/altperl/create_set.pl
@@ -37,10 +37,9 @@
 
 print OUTFILE "
 	try {
-		create set (id = $set, origin = 1, comment = 'Set for slony tables');
-	}
-	on error {
-		echo 'Could not create subscription set!';
+      create set (id = $set, origin = 1, comment = 'Set $set for $SETNAME');
+} on error {
+      echo 'Could not create subscription set $set for $SETNAME!';
 		exit -1;
 	}
 ";
@@ -51,7 +50,7 @@
 open (OUTFILE, ">$OUTPUTFILE");
 print OUTFILE genheader();
 print OUTFILE "
-	echo 'Subscription set created';
+	echo 'Subscription set $set created';
 	echo 'Adding tables to the subscription set';
 
 ";
@@ -60,17 +59,27 @@
 foreach my $table (@SERIALTABLES) {
   $table = ensure_namespace($table);
   print OUTFILE "
-		set add table (set id = $set, origin = 1, id = $TABLE_ID, full qualified name = '$table', comment = 'Table $table', key=serial);
+		set add table (set id = $set, origin = 1, id = $TABLE_ID, full qualified name = '$table', comment = 'Table $table without primary key', key=serial);
                 echo 'Add unkeyed table $table';
 "; 
   $TABLE_ID++;
 }
 
-foreach my $table (@KEYEDTABLES) {
+foreach my $table (@PKEYEDTABLES) {
+  $table = ensure_namespace($table);
+  print OUTFILE "
+		set add table (set id = $set, origin = 1, id = $TABLE_ID, full qualified name = '$table', comment = 'Table $table with primary key');
+                echo 'Add primary keyed table $table';
+";
+  $TABLE_ID++;
+}
+
+foreach my $table (keys %KEYEDTABLES) {
   $table = ensure_namespace($table);
+  $key = $KEYEDTABLES{$table};
   print OUTFILE "
-		set add table (set id = $set, origin = 1, id = $TABLE_ID, full qualified name = '$table', comment = 'Table $table');
-                echo 'Add keyed table $table';
+                set add table (set id = $set, origin = 1, id = $TABLE_ID, full qualified name = '$table', key='$key', comment = 'Table $table with candidate primary key $key');
+                echo 'Add candidate primary keyed table $table';
 ";
   $TABLE_ID++;
 }
--- tools/altperl/update_node.pl
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/perl
-# $Id: update_node.pl,v 1.2 2004/08/03 18:15:06 cbbrowne Exp $
-# Author: Christopher Browne
-# Copyright 2004 Afilias Canada
-
-require 'slon-tools.pm';
-require 'slon.env';
-
-open(SLONIK, ">/tmp/update_nodes.$$");
-print SLONIK genheader();
-
-foreach my $node (@NODES) {
-  print SLONIK "update functions (id = $node);\n";
-};
-close SLONIK;
-print `slonik /tmp/update_nodes.$$`;
--- tools/altperl/uninstall_node.pl
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/usr/bin/perl
-# $Id: uninstall_node.pl,v 1.1 2004/07/25 04:02:51 cbbrowne Exp $
-# Author: Christopher Browne
-# Copyright 2004 Afilias Canada
-
-require 'slon-tools.pm';
-require 'slon.env';
-#use Pg;
-open(SLONIK, "|slonik");
-print SLONIK genheader();
-print SLONIK qq{
-	uninstall node (id=1);
-};
-close SLONIK;
-
-foreach my $node (@NODES) {
-    foreach my $command ("drop schema _$SETNAME cascade;") {
-	print $command, "\n";
-	print `echo "$command" | psql -h $HOST[$node] -U $USER[$node] -d $DBNAME[$node] -p $PORT[$node]`;
-    }
-    foreach my $t (@SERIALTABLES) {
-	my $command = "alter table $t drop column \\\"_Slony-I_" . $SETNAME . "_rowID\\\";";
-	print $command, "\n";
-	print `echo "$command" | psql -h $HOST[$node] -U $USER[$node] -d $DBNAME[$node] -p $PORT[$node]`;
-    }
-}
Index: slon_watchdog.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/slon_watchdog.pl,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ltools/altperl/slon_watchdog.pl -Ltools/altperl/slon_watchdog.pl -u -w -r1.1 -r1.2
--- tools/altperl/slon_watchdog.pl
+++ tools/altperl/slon_watchdog.pl
@@ -13,32 +13,27 @@
   die "Usage: ./slon_watchdog node sleep-time\n";
 }
 
-slon_watchdog();
+if ($node =~/^node(\d+)$/) {
+  $nodenum = $1;
+}
+
+slon_watchdog($node, $nodenum);
 
 sub slon_watchdog {
-  get_pid();
+  my ($node, $nodenum) = @_;
+  $pid = get_pid($node);
   if (!($pid)) {
-    if ($node eq "node1") {
-      open (SLONLOG, ">>$LOGDIR/slon-$DBNAME1.out");
-      print SLONLOG "WATCHDOG: No Slon is running for set $SETNAME!\n";
+    my ($dsn, $dbname) = ($DSN[$nodenum], $DBNAME[$nodenum]);
+    open (SLONLOG, ">>$LOGDIR/slon-$dbname-$node.err");
+    print SLONLOG "WATCHDOG: No Slon is running for node $node!\n";
       print SLONLOG "WATCHDOG: You ought to check the postmaster and slon for evidence of a crash!\n";
       print SLONLOG "WATCHDOG: I'm going to restart slon for $node...\n";
-      #first restart the node
-      system "./restart_node.sh";
-      system "$SLON_BIN_PATH/slon $SETNAME -s 1000 -d2 'dbname=$DBNAME1 port=$DBPORT1' 2>$LOGDIR/slon-$DBNAME1.err >$LOGDIR/slon-$DBNAME1.out &";
-      get_pid();
+    # First, restart the node using slonik
+    system "./restart_node.sh $node";
+    # Next, restart the slon process to service the node
+    start_slon($nodenum);
+    $pid = get_pid($node);
       print SLONLOG "WATCHDOG: Restarted slon for set $SETNAME, PID $pid\n";
-    } elsif ($node eq "node2") {
-      open (SLONLOG, ">>$LOGDIR/slon-$DBNAME2.out");
-      print SLONLOG "WATCHDOG: No Slon is running for set $SETNAME!\n";
-      print SLONLOG "WATCHDOG: You ought to check the postmaster and slon for evidence of a crash!\n";
-      print SLONLOG "WATCHDOG: I'm going to restart slon for $node...\n";
-      #first restart the node
-      system "./restart_node.sh";
-      system "$SLON_BIN_PATH/slon $SETNAME -s 1000 -d2 'dbname=$DBNAME2 port=$DBPORT2' 2>$LOGDIR/slon-$DBNAME2.err >$LOGDIR/slon-$DBNAME2.out &";
-      get_pid();
-      print SLONLOG "Restarted slon for set $SETNAME, PID $pid\n";
-    }
   } else {
     open(LOG, ">>$LOGDIR/slon_watchdog.log");
     print LOG "\n";
@@ -51,10 +46,3 @@
   sleep $sleep;
   slon_watchdog();
 }
-
-sub get_pid {
-  open(PSOUT, "ps -auxww | grep -v grep | grep \"slon $SETNAME\" | sort -n | awk '{print \$2}'|");
-  $pid = <PSOUT>;
-  chop $pid;
-  close(PSOUT);
-}
Index: init_cluster.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/init_cluster.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/init_cluster.pl -Ltools/altperl/init_cluster.pl -u -w -r1.2 -r1.3
--- tools/altperl/init_cluster.pl
+++ tools/altperl/init_cluster.pl
@@ -14,22 +14,22 @@
 
 my ($dbname, $dbhost)=($DBNAME[1], $HOST[1]);
 print SLONIK "
-try {
-   init cluster (id = 1, comment = 'Node $dbname@$dbhost');
+   init cluster (id = 1, comment = 'Node $dbname\@$dbhost');
 ";
+close SLONIK;
+run_slonik_script($FILE);
+
+open(SLONIK, ">$FILE");
+print SLONIK genheader();
 
 foreach my $node (@NODES) {
     if ($node > 1) {  # skip the first one; it's already initialized!
 	my ($dbname, $dbhost) = ($DBNAME[$node], $HOST[$node]);
-	print SLONIK "   store node (id = $node, comment = 'Node $dbname@$dbhost');\n";
+    print SLONIK "   store node (id = $node, comment = 'Node $node - $dbname\@$dbhost');\n";
     }
 }
 
-print SLONIK "} on error {
-        echo 'Could not set up all nodes as slonik nodes';
-        exit 1;
-}
-echo 'Set up replication nodes';
+print SLONIK "echo 'Set up replication nodes';
 ";
 close SLONIK;
 run_slonik_script($FILE);
@@ -50,8 +50,14 @@
 	  my $dsnb = $DSN[$nodeb];
 	  my $providerba = $VIA[$nodea][$nodeb];
 	  my $providerab = $VIA[$nodeb][$nodea];
+      if (!$printed[$nodea][$nodeb]) {
 	  print SLONIK "      store path (server = $nodea, client = $nodeb, conninfo = '$dsna');\n";
+	$printed[$nodea][$nodeb] = "done";
+      }
+      if (!$printed[$nodeb][$nodea]) {
 	  print SLONIK "      store path (server = $nodeb, client = $nodea, conninfo = '$dsnb');\n";
+	$printed[$nodeb][$nodea] = "done";
+      }
 	  print SLONIK "echo 'configured path between $nodea and $nodeb';\n";
       }
   }
Index: slon.env
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/slon.env,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/slon.env -Ltools/altperl/slon.env -u -w -r1.2 -r1.3
--- tools/altperl/slon.env
+++ tools/altperl/slon.env
@@ -30,102 +30,31 @@
   # add_node(host => 'marge', dbname=>'flexnodee', port=>5532,user=>'postgres',
   #  	 password=>'postgres', node=>7, parent=>6, noforward=>'no');
 
-  # These are the tables that already have unique keys, that therefore do
+  # These are the tables that already have primary keys, that therefore do
   # not need for Slony-I to add sequences/indices
-  @KEYEDTABLES=(
-		"balance_history",
-		"billing_account",
-		"bl_update_reason",
-		"epp_activity",
-		"epp_contact",
-		"epp_contact_map",
-		"epp_contact_status",
-		"epp_dns_update",
-		"epp_domain",
-		"epp_domain_contact",
-		"epp_domain_host",
-		"epp_domain_protocol",
-		"epp_domain_protocol_history",
-		"epp_domain_status",
-		"epp_domain_trn_contact",
-		"epp_domain_trn_registrant",
-		"epp_host",
-		"epp_host_ip",
-		"epp_host_status",
-		"epp_poll_queue",
-		"epp_registrar",
-		"epp_registrar_contact",
-		"epp_registrar_notification",
-		"epp_registrar_role",
-		"epp_registrar_status",
-		"epp_registrar_tld",
-		"epp_registrar_zone",
-		"epp_role",
-		"epp_server",
-		"epp_trans_log",
-		"epp_trans_reason",
-		"epp_user",
-		"fee_schedule",
-		"flex_tld",
-		"flex_zone",
-		"idn_script"
+  @PKEYEDTABLES=(
+	       ); 
+
+  # These are tables with candidate primary keys; we assume Slony
+  # isn't smart enough (yet) to discover the key.
+
+  %KEYEDTABLES=(
+
+		table1 => 'index_on_table1',
+		table2 => 'index_on_table2'
 	       );
 
   # Here are the tables to be replicated that do NOT have unique
   # keys, to which Slony-I will have to add a key field
   @SERIALTABLES=(
-		 "epp_registrar_ipallow",
-		 "epp_domain_renew",
-		 "billing_event_logger",
-		 "epp_registrar_low_threshold",
-		 "epp_domain_archive",
-		 "bl_update_history",
-		 "res_country",
-		 "billing_price",
-		 "epp_log_1",
-		 "epp_log_2",
-		 "epp_log_3",
-		 "epp_log_4",
-		 "epp_log_5",
-		 "epp_log_6",
-		 "epp_log_7",
-		 "epp_log_8",
-		 "epp_log_9",
-		 "billing_transaction_posted",
-		 "billing_balance_history"
 		);
 
   # These are the applications' sequences that are to be
   # replicated
   @SEQUENCES=(
-	      "reserved_names_seq",
-	      "epp_log_seq_",
-	      "whois_cachemgmt_seq",
-	      "flex_tld_id_seq",
-	      "domain_seq",
-	      "billing_seq",
-	      "domain_lock_id_seq",
-	      "bl_update_reason_id_seq",
-	      "epp_log_active_seq",
-	      "domain_id_seq",
-	      "registrar_notification_id_seq",
-	      "poll_id_seq",
-	      "host_id_seq",
-	      "fee_schedule_seq",
-	      "registrar_id_seq",
-	      "contact_id_seq",
-	      "afilias_billable_trns_seq",
-	      "trid_seq",
-	      "role_id_seq",
-	      "rpt_registrar_stats_id_seq",
-	      "whois_activity_seq",
-	      "epp_trans_log_id_seq",
-	      "epp_activity_seq",
-	      "bl_update_history_id_seq",
-	      "whois_cachemgmt_server_seq",
-	      "rrp_trid_seq",
-	      "user_id_seq",
-	      "dns_update_id_seq"
+	      "seq1",
+	      "seq2",
+	      "seq3"
 	     );
 
 }
Index: drop_set.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/drop_set.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/drop_set.pl -Ltools/altperl/drop_set.pl -u -w -r1.2 -r1.3
--- tools/altperl/drop_set.pl
+++ tools/altperl/drop_set.pl
@@ -20,8 +20,7 @@
 print SLONIK qq{
         try {
                 drop set (id = $set, origin=1);
-        }
-        on error {
+} on error {
                 exit 1;
         }
         echo 'Dropped set $set';
Index: slon_start.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/slon_start.pl,v
retrieving revision 1.3
retrieving revision 1.4
diff -Ltools/altperl/slon_start.pl -Ltools/altperl/slon_start.pl -u -w -r1.3 -r1.4
--- tools/altperl/slon_start.pl
+++ tools/altperl/slon_start.pl
@@ -6,6 +6,7 @@
 #start the slon daemon
 require 'slon-tools.pm';
 require 'slon.env';
+$SLEEPTIME=30;   # number of seconds for watchdog to sleep
 
 $node =$ARGV[0];
 
@@ -13,43 +14,31 @@
   die "Usage: ./slon_start [node]\n";
 }
 
-if ($node =~ /^node\d+$/) {
+if ($node =~ /^node(\d+)$/) {
   # Node name is in proper form
+  $nodenum = $1;
 } else {
   print "Valid node names are node1, node2, ...\n\n";
   die "Usage: ./slon_start [node]\n";
 }
 
-get_pid();
+$pid = get_pid($node);
 
 if ($pid) {
   die "Slon is already running for set $SETNAME!\n";
 }
 
-$node =~ /node(\d*)$/;
-$nodenum = $1;
 my $dsn = $DSN[$nodenum];
 my $dbname=$DBNAME[$nodenum];
-system "$SLON_BIN_PATH/slon -d2 -s 1000 $SETNAME '$dsn' 2>$LOGDIR/slon-$dbname-$node.err >$LOGDIR/slon-$dbname-$node.out &";
+start_slon($nodenum);
 
-get_pid();
+$pid = get_pid($node);
 
 if (!($pid)){
-  print "Slon failed to start for set $SETNAME!\n";
+  print "Slon failed to start for cluster $SETNAME, node $node\n";
 } else {
-  print "Slon successfully started for set $SETNAME\n";
+  print "Slon successfully started for cluster $SETNAME, node $node\n";
   print "PID [$pid]\n";
-}
-#start the watchdog process
-system " perl slon_watchdog.pl $node 30 &";
-
-sub get_pid {
-  $node =~ /node(\d*)$/;
-  my $nodenum = $1;
-  my ($dbname, $dbport, $dbhost) = ($DBNAME[$nodenum], $PORT[$nodenum], $HOST[$nodenum]);
-#  print "Searching for PID for $dbname on port $dbport\n";
-  open(PSOUT, "ps -auxww | egrep \"[s]lon $SETNAME\" | egrep \"host=$dbhost dbname=$dbname.*port=$dbport\" | sort -n | awk '{print \$2}'|");
-  $pid = <PSOUT>;
-  chop $pid;
-  close(PSOUT);
+  print "Start the watchdog process as well...\n";
+  system "perl slon_watchdog.pl $node $SLEEPTIME &";
 }
Index: reset_cluster.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/reset_cluster.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/reset_cluster.pl -Ltools/altperl/reset_cluster.pl -u -w -r1.2 -r1.3
--- tools/altperl/reset_cluster.pl
+++ tools/altperl/reset_cluster.pl
@@ -36,13 +36,12 @@
 }
 
 print SLONIK qq[
-        }
-        on error {
+} on error {
+  echo 'Remapping of cluster failed...';
                 exit 1;
         }
         echo 'Replication nodes prepared';
         echo 'Please start the replication daemon on both systems';
-
 ];
 
 close SLONIK;
Index: slon_kill.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/slon_kill.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/slon_kill.pl -Ltools/altperl/slon_kill.pl -u -w -r1.2 -r1.3
--- tools/altperl/slon_kill.pl
+++ tools/altperl/slon_kill.pl
@@ -11,7 +11,7 @@
 print "1.  Kill slon watchdogs\n";
 #kill the watchdog
 
-open(PSOUT, "ps auxww | egrep '[s]lon_watchdog' | sort -n | awk '{print \$2}'|");
+open(PSOUT, ps_args() . " | egrep '[s]lon_watchdog' | sort -n | awk '{print \$2}'|");
 $found="n";
 while ($pid = <PSOUT>) {
   chomp $pid;
@@ -19,7 +19,7 @@
     print "No slon_watchdog is running for set $SETNAME!\n";
   } else {
     $found="y";
-    system "kill $pid";
+    kill 9, $pid;
     print "slon_watchdog for set $SETNAME killed - PID [$pid]\n";
   }
 }
@@ -30,13 +30,13 @@
 print "\n2. Kill slon processes\n";
 #kill the slon daemon
 $found="n";
-open(PSOUT, "ps auxww | egrep \"[s]lon .*$SETNAME\" | sort -n | awk '{print \$2}'|");
+open(PSOUT, ps_args() . " | egrep \"[s]lon .*$SETNAME\" | sort -n | awk '{print \$2}'|");
 while ($pid = <PSOUT>) {
   chomp $pid;
   if (!($pid)) {
     print "No Slon is running for set $SETNAME!\n";
   } else {
-    system "kill -9 $pid";
+    kill 9, $pid;
     print "Slon for set $SETNAME killed - PID [$pid]\n";
     $found="y";
   }
Index: failover.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/failover.pl,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ltools/altperl/failover.pl -Ltools/altperl/failover.pl -u -w -r1.2 -r1.3
--- tools/altperl/failover.pl
+++ tools/altperl/failover.pl
@@ -32,8 +32,7 @@
 print SLONIK qq[
         try {
                 failover (id = $node1, backup node = $node2);
-        }
-        on error {
+} on error {
                 echo 'Failure to fail node $node1 over to $node2';
                 exit 1;
         }
Index: restart_node.pl
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/tools/altperl/restart_node.pl,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ltools/altperl/restart_node.pl -Ltools/altperl/restart_node.pl -u -w -r1.1 -r1.2
--- tools/altperl/restart_node.pl
+++ tools/altperl/restart_node.pl
@@ -6,13 +6,16 @@
 require 'slon-tools.pm';
 require 'slon.env';
 
-foreach my $node (@NODES) {
-    my $dsn = $DSN[$node];
-    open(SLONIK, "|slonik");
-    print SLONIK qq{
-	cluster name = $SETNAME ;
-	node $node admin conninfo = '$dsn';
-	restart node $node;
-    };
-    close SLONIK;
+my ($node) = @ARGV;   # @_ is empty at file scope; the node name comes from the command line
+if ($node =~ /^node(\d+)$/) {
+  $nodenum = $1;
+} else {
+  die "./restart_node nodeN\n";
 }
+my $FILE="/tmp/restart.$$";
+
+open(SLONIK, ">$FILE");
+print SLONIK genheader();
+print SLONIK "restart node $node;\n";
+close SLONIK;
+run_slonik_script($FILE);

