Dan Falconer lists_slony1-general at avsupport.com
Fri Sep 7 06:47:02 PDT 2007
	Running Postgres v8.0.13 and Slony 1.2.11.  In our development replication 
environment (thank God we finally got it set up), we've got the following two 
nodes, both running the same versions of Slony and Postgres (the matching 
slon_tools entries are sketched after the list):

	Node 10 ("Cartman")
	Debian 4.0

	Node 20 ("Belial")
	SLES 10.0 (x86_64)
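
	For context, the two nodes are defined in conf/slon_tools-pl.conf more or 
less as follows -- this is a sketch reconstructed from the conninfo lines in the 
transcript further down, not a copy of the actual file:

$CLUSTER_NAME = 'dev_pl_replication';

# Node 10 ("Cartman", Debian 4.0)
add_node(node   => 10,
         host   => '192.168.10.226',
         dbname => 'pl',
         port   => 5432,
         user   => 'postgres');

# Node 20 ("Belial", SLES 10.0 x86_64)
add_node(node   => 20,
         host   => '192.168.10.101',
         dbname => 'pl',
         port   => 5432,
         user   => 'postgres');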

	I had set up Belial as the temporary master, then attempted to move the 
origin to Cartman.  I used the "slonik_move_set" script provided with the 
altperl tools, but decided to update it to emit the "wait for event" 
commands described at http://slony.info/documentation/failover.html (a rough 
sketch of that edit follows the transcript).  The following transcript of the 
session shows the problem with the mysterious "node 1" error (with some added 
line breaks to set off each run and its output):

--------------------------------- SNIP ---------------------------------
postgres@belial:~/slony> vi bin/slonik_move_set

postgres@belial:~/slony> ./bin/slonik_move_set --config conf/slon_tools-pl.conf 1 20 10
cluster name = dev_pl_replication;
 node 10 admin conninfo='host=192.168.10.226 dbname=pl user=postgres port=5432';
 node 20 admin conninfo='host=192.168.10.101 dbname=pl user=postgres port=5432';
  echo 'Locking down set 1 on node 20';
  lock set (id = 1, origin = 20);
  echo 'Locked down - moving it';
  wait for event (origin = 20, confirmed = 10);
  move set (id = 1, old origin = 20, new origin = 10);
  wait for event (origin = 20, confirmed = 10);
  echo 'Replication set 1 moved from node 20 to 10.  Remember to';
  echo 'update your configuration file, if necessary, to note the new location';
  echo 'for the set.';


postgres@belial:~/slony> ./bin/slonik_move_set --config conf/slon_tools-pl.conf 1 20 10 | slonik
<stdin>:7: Error: No admin conninfo provided for node 1
<stdin>:9: Error: No admin conninfo provided for node 1



postgres@belial:~/slony> ./bin/slonik_move_set --config conf/slon_tools-pl.conf 1 20 10 | grep "node 1"
 node 10 admin conninfo='host=192.168.10.226 dbname=pl user=postgres port=5432';


postgres@belial:~/slony> svn revert bin/slonik_move_set
Reverted 'bin/slonik_move_set'


postgres@belial:~/slony> ./bin/slonik_move_set --config conf/slon_tools-pl.conf 1 20 10
cluster name = dev_pl_replication;
 node 10 admin conninfo='host=192.168.10.226 dbname=pl user=postgres port=5432';
 node 20 admin conninfo='host=192.168.10.101 dbname=pl user=postgres port=5432';
  echo 'Locking down set 1 on node 20';
  lock set (id = 1, origin = 20);
  echo 'Locked down - moving it';
  move set (id = 1, old origin = 20, new origin = 10);
  echo 'Replication set 1 moved from node 20 to 10.  Remember to';
  echo 'update your configuration file, if necessary, to note the new location';
  echo 'for the set.';


postgres@belial:~/slony> ./bin/slonik_move_set --config conf/slon_tools-pl.conf 1 20 10 | slonik
<stdin>:4: Locking down set 1 on node 20
<stdin>:6: Locked down - moving it
<stdin>:8: Replication set 1 moved from node 20 to 10.  Remember to
<stdin>:9: update your configuration file, if necessary, to note the new location
<stdin>:10: for the set.
-------------------------------- /SNIP ---------------------------------
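
	For reference, my edit to bin/slonik_move_set boiled down to printing the 
two "wait for event" statements around "move set".  Here is a rough, standalone 
sketch of what the edited script emits for "1 20 10" -- hypothetical variable 
names, not the actual altperl code:

#!/usr/bin/perl
# Hypothetical sketch of the slonik statements the edited
# slonik_move_set generates for set 1, old origin 20, new origin 10.
use strict;
use warnings;

my ($set, $old_origin, $new_origin) = (1, 20, 10);

print "  echo 'Locking down set $set on node $old_origin';\n";
print "  lock set (id = $set, origin = $old_origin);\n";
print "  echo 'Locked down - moving it';\n";
# Added: wait for the new origin to confirm events from the old origin
# before the move is issued ...
print "  wait for event (origin = $old_origin, confirmed = $new_origin);\n";
print "  move set (id = $set, old origin = $old_origin, new origin = $new_origin);\n";
# ... and wait again once the move has been issued.
print "  wait for event (origin = $old_origin, confirmed = $new_origin);\n";
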
-- 
Best Regards,


Dan Falconer
"Head Geek", Avsupport, Inc. / Partslogistics.com
http://www.partslogistics.com

